Unnamed: 0,id,type,created_at,repo,repo_url,action,title,labels,body,index,text_combine,label,text,binary_label
1871,27656971514.0,IssuesEvent,2023-03-12 03:33:45,pygame/pygame,https://api.github.com/repos/pygame/pygame,closed,Refactor/Unify CPU dispatch tasks,misc portability,"- [ ] make tests CPU-aware: test all available impls in test suite (CPU arch permitting)
- [ ] ensure ALL impls are built in bdist_wheel (this includes code for better than current CPU)
- [ ] Inspect generated assembly on MSVC, GCC, Clang, ICC, Apple Clang
- [ ] decide/document which compilers are supported
- [ ] document CPU dispatch guidelines
- [ ] nice to have: optionally disable dispatch for arch-specific builds, hard-code best path (for Gentoo users)
- [ ] nice to have: optionally disable all dispatch in setup.py
- [ ] either: annotate arch-specific functions (compiler-specific)
- [ ] or: move arch-specific functions into different files (add arch to Setup.in)
- [ ] maybe: annotate certain functions with identical code but different target arch, trust C compiler to vectorise
- [ ] x86
- [ ] x86_64
- [ ] ARMv6
- [ ] ARM64
- [ ] ensure backward compatibility with most C99 compilers",True,"Refactor/Unify CPU dispatch tasks - - [ ] make tests CPU-aware: test all available impls in test suite (CPU arch permitting)
- [ ] ensure ALL impls are built in bdist_wheel (this includes code for better than current CPU)
- [ ] Inspect generated assembly on MSVC, GCC, Clang, ICC, Apple Clang
- [ ] decide/document which compilers are supported
- [ ] document CPU dispatch guidelines
- [ ] nice to have: optionally disable dispatch for arch-specific builds, hard-code best path (for Gentoo users)
- [ ] nice to have: optionally disable all dispatch in setup.py
- [ ] either: annotate arch-specific functions (compiler-specific)
- [ ] or: move arch-specific functions into different files (add arch to Setup.in)
- [ ] maybe: annotate certain functions with identical code but different target arch, trust C compiler to vectorise
- [ ] x86
- [ ] x86_64
- [ ] ARMv6
- [ ] ARM64
- [ ] ensure backward compatibility with most C99 compilers",1,refactor unify cpu dispatch tasks make tests cpu aware test all available impls in test suite cpu arch permitting ensure all impls are built in bdist wheel this includes code for better than current cpu inspect generated assembly on msvc gcc clang icc apple clang decide document which compilers are supported document cpu dispatch guidelines nice to have optionally disable dispatch for arch specific builds hard code best path for gentoo users nice to have optionally disable all dispatch in setup py either annotate arch specific functions compiler specific or move arch specific functions into different files add arch to setup in maybe annotate certain functions with identical code but different target arch trust c compiler to vectorise ensure backward compatibility with most compilers,1
938,12300753513.0,IssuesEvent,2020-05-11 14:26:37,ocaml/opam,https://api.github.com/repos/ocaml/opam,closed,bubblewrap on Ubuntu 16.04,AREA: PORTABILITY,`bubblewrap` is not available in Ubuntu 16.04's package repository. What is the right approach to fixing/addressing this for users?,True,bubblewrap on Ubuntu 16.04 - `bubblewrap` is not available in Ubuntu 16.04's package repository. What is the right approach to fixing/addressing this for users?,1,bubblewrap on ubuntu bubblewrap is not available in ubuntu s package repository what is the right approach to fixing addressing this for users ,1
120239,25762941059.0,IssuesEvent,2022-12-08 22:16:50,ajwalkiewicz/cochar,https://api.github.com/repos/ajwalkiewicz/cochar,closed,Simplify Character class,code improvement,"Currently the Character class is full of boilerplate, a lot of setters and getters that do the same thing. Maybe creating custom descriptors would simplify this.
Link: https://docs.python.org/3/howto/descriptor.html",1.0,"Simplify Character class - Currently the Character class is full of boilerplate, a lot of setters and getters that do the same thing. Maybe creating custom descriptors would simplify this.
Link: https://docs.python.org/3/howto/descriptor.html",0,simplify character class currently character class is full of boiler plate a lot of setters and getters that does same think maybe creating custom descriptors would simplify this link ,0
88202,15800748064.0,IssuesEvent,2021-04-03 01:06:40,hammondjm/sql,https://api.github.com/repos/hammondjm/sql,opened,CVE-2020-27216 (High) detected in jetty-webapp-9.2.24.v20180105.jar,security vulnerability,"## CVE-2020-27216 - High Severity Vulnerability
Vulnerable Library - jetty-webapp-9.2.24.v20180105.jar
Jetty web application support
Library home page: http://www.eclipse.org/jetty
Path to dependency file: sql/sql-jdbc/build.gradle
Path to vulnerable library: /tmp/ws-ua_20201005172659_RQIDJG/downloadResource_PFXGFH/20201005172921/jetty-webapp-9.2.24.v20180105.jar
Dependency Hierarchy:
- wiremock-2.20.0.jar (Root Library)
- :x: **jetty-webapp-9.2.24.v20180105.jar** (Vulnerable Library)
Found in base branch: master
Vulnerability Details
In Eclipse Jetty versions 1.0 thru 9.4.32.v20200930, 10.0.0.alpha1 thru 10.0.0.beta2, and 11.0.0.alpha1 thru 11.0.0.beta2O, on Unix like systems, the system's temporary directory is shared between all users on that system. A collocated user can observe the process of creating a temporary sub directory in the shared temporary directory and race to complete the creation of the temporary subdirectory. If the attacker wins the race then they will have read and write permission to the subdirectory used to unpack web applications, including their WEB-INF/lib jar files and JSP files. If any code is ever executed out of this temporary directory, this can lead to a local privilege escalation vulnerability.
Publish Date: 2020-10-23
URL: CVE-2020-27216
CVSS 3 Score Details (7.0 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://bugs.eclipse.org/bugs/show_bug.cgi?id=567921
Release Date: 2020-10-20
Fix Resolution: org.eclipse.jetty:jetty-runner:9.4.33,10.0.0.beta3,11.0.0.beta3;org.eclipse.jetty:jetty-webapp:9.4.33,10.0.0.beta3,11.0.0.beta3
",True,"CVE-2020-27216 (High) detected in jetty-webapp-9.2.24.v20180105.jar - ## CVE-2020-27216 - High Severity Vulnerability
Vulnerable Library - jetty-webapp-9.2.24.v20180105.jar
Jetty web application support
Library home page: http://www.eclipse.org/jetty
Path to dependency file: sql/sql-jdbc/build.gradle
Path to vulnerable library: /tmp/ws-ua_20201005172659_RQIDJG/downloadResource_PFXGFH/20201005172921/jetty-webapp-9.2.24.v20180105.jar
Dependency Hierarchy:
- wiremock-2.20.0.jar (Root Library)
- :x: **jetty-webapp-9.2.24.v20180105.jar** (Vulnerable Library)
Found in base branch: master
Vulnerability Details
In Eclipse Jetty versions 1.0 thru 9.4.32.v20200930, 10.0.0.alpha1 thru 10.0.0.beta2, and 11.0.0.alpha1 thru 11.0.0.beta2O, on Unix like systems, the system's temporary directory is shared between all users on that system. A collocated user can observe the process of creating a temporary sub directory in the shared temporary directory and race to complete the creation of the temporary subdirectory. If the attacker wins the race then they will have read and write permission to the subdirectory used to unpack web applications, including their WEB-INF/lib jar files and JSP files. If any code is ever executed out of this temporary directory, this can lead to a local privilege escalation vulnerability.
Publish Date: 2020-10-23
URL: CVE-2020-27216
CVSS 3 Score Details (7.0 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://bugs.eclipse.org/bugs/show_bug.cgi?id=567921
Release Date: 2020-10-20
Fix Resolution: org.eclipse.jetty:jetty-runner:9.4.33,10.0.0.beta3,11.0.0.beta3;org.eclipse.jetty:jetty-webapp:9.4.33,10.0.0.beta3,11.0.0.beta3
",0,cve high detected in jetty webapp jar cve high severity vulnerability vulnerable library jetty webapp jar jetty web application support library home page a href path to dependency file sql sql jdbc build gradle path to vulnerable library tmp ws ua rqidjg downloadresource pfxgfh jetty webapp jar dependency hierarchy wiremock jar root library x jetty webapp jar vulnerable library found in base branch master vulnerability details in eclipse jetty versions thru thru and thru on unix like systems the system s temporary directory is shared between all users on that system a collocated user can observe the process of creating a temporary sub directory in the shared temporary directory and race to complete the creation of the temporary subdirectory if the attacker wins the race then they will have read and write permission to the subdirectory used to unpack web applications including their web inf lib jar files and jsp files if any code is ever executed out of this temporary directory this can lead to a local privilege escalation vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org eclipse jetty jetty runner org eclipse jetty jetty webapp isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com github tomakehurst wiremock org eclipse jetty jetty webapp isminimumfixversionavailable true minimumfixversion org eclipse jetty jetty runner org eclipse jetty jetty webapp basebranches vulnerabilityidentifier cve vulnerabilitydetails in eclipse jetty versions thru thru and thru on unix like systems the system temporary directory is shared between all users on that system a collocated user can observe the process of creating a temporary sub directory in the shared temporary directory and race to complete the creation of the temporary subdirectory if the attacker wins the race then they will have read and write permission to the subdirectory used to unpack web applications including their web inf lib jar files and jsp files if any code is ever executed out of this temporary directory this can lead to a local privilege escalation vulnerability vulnerabilityurl ,0
153678,13522917622.0,IssuesEvent,2020-09-15 09:13:23,strongbox/strongbox,https://api.github.com/repos/strongbox/strongbox,opened,Create pages for hackfests,documentation,"# Task Description
We need to add pages for hackfests, especially now, with the Grace Hopper Opensource Day and #Hacktoberfest being right around the corner.
# Tasks
The following tasks will need to be carried out:
* [ ] Create a new section in the wiki for Hackfests
* [ ] Create a new page for Grace Hopper Celebration Opensource Day
* [ ] Create a new page for Hacktoberfest
* [ ] Create a new generic page for hack fests
# Useful Links
# Help
* [Our chat](https://chat.carlspring.org/)
* Points of contact:
* @carlspring
* @sbespalov
* @steve-todorov
",1.0,"Create pages for hackfests - # Task Description
We need to add pages for hackfests, especially now, with the Grace Hopper Opensource Day and #Hacktoberfest being right around the corner.
# Tasks
The following tasks will need to be carried out:
* [ ] Create a new section in the wiki for Hackfests
* [ ] Create a new page for Grace Hopper Celebration Opensource Day
* [ ] Create a new page for Hacktoberfest
* [ ] Create a new generic page for hack fests
# Useful Links
# Help
* [Our chat](https://chat.carlspring.org/)
* Points of contact:
* @carlspring
* @sbespalov
* @steve-todorov
",0,create pages for hackfests task description we need to add pages for hackfests especially now with the grace hopper opensource day and hacktoberfest beiing right around the corner tasks the following tasks will need to be carried out create a new section in the wiki for hackfests create a new page for grace hopper celebration opensource day create a new page for hacktoberfest create a new generic page for hack fests useful links help points of contact carlspring sbespalov steve todorov ,0
1264,16751076476.0,IssuesEvent,2021-06-11 23:37:31,chapel-lang/chapel,https://api.github.com/repos/chapel-lang/chapel,opened,Chapel on Macs with M1 chips,area: Compiler area: Runtime type: Portability user issue,"This issue generally asks ""How is Chapel doing on Macs with M1 chips?"" where I think we mostly don't have much experience within the core team. It would be good to be able to have access to one in-house to take stock of things.
At present we know:
- [ ] we don't have a homebrew bottle for M1 Macs (#17910)
- [ ] the GASNet team is seeing a failure on their M1 Mac runs (#17825)
",True,"Chapel on Macs with M1 chips - This issue generally asks ""How is Chapel doing on Macs with M1 chips?"" where I think we mostly don't have much experience within the core team. It would be good to be able to have access to one in-house to take stock of things.
At present we know:
- [ ] we don't have a homebrew bottle for M1 Macs (#17910)
- [ ] the GASNet team is seeing a failure on their M1 Mac runs (#17825)
",1,chapel on macs with chips this issue generally asks how is chapel doing on macs with chips where i think we mostly don t have much experience within the core team it would be good to be able to have access to one in house to take stock of things at present we know we don t have a homebrew bottle for macs the gasnet team is seeing a failure on their mac runs ,1
1774,26052079880.0,IssuesEvent,2022-12-22 19:51:28,golang/vulndb,https://api.github.com/repos/golang/vulndb,closed,x/vulndb: potential Go vuln in github.com/destinygg/chat: CVE-2020-36625,excluded: NOT_IMPORTABLE,"CVE-2020-36625 references [github.com/destinygg/chat](https://github.com/destinygg/chat), which may be a Go module.
Description:
** UNSUPPORTED WHEN ASSIGNED ** A vulnerability was found in destiny.gg chat. It has been rated as problematic. This issue affects the function websocket.Upgrader of the file main.go. The manipulation leads to cross-site request forgery. The attack may be initiated remotely. The name of the patch is bebd256fc3063111fb4503ca25e005ebf6e73780. It is recommended to apply a patch to fix this issue. The identifier VDB-216521 was assigned to this vulnerability. NOTE: This vulnerability only affects products that are no longer supported by the maintainer.
References:
- NIST: https://nvd.nist.gov/vuln/detail/CVE-2020-36625
- JSON: https://github.com/CVEProject/cvelist/tree/746b542db7536ae79b7aa4e51d1d9965c12d786f/2020/36xxx/CVE-2020-36625.json
- fix: https://github.com/destinygg/chat/pull/35
- fix: https://github.com/destinygg/chat/commit/bebd256fc3063111fb4503ca25e005ebf6e73780
- web: https://vuldb.com/?id.216521
- Imported by: https://pkg.go.dev/github.com/destinygg/chat?tab=importedby
Cross references:
No existing reports found with this module or alias.
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/destinygg/chat
packages:
- package: chat
description: |
** UNSUPPORTED WHEN ASSIGNED ** A vulnerability was found in destiny.gg chat. It has been rated as problematic. This issue affects the function websocket.Upgrader of the file main.go. The manipulation leads to cross-site request forgery. The attack may be initiated remotely. The name of the patch is bebd256fc3063111fb4503ca25e005ebf6e73780. It is recommended to apply a patch to fix this issue. The identifier VDB-216521 was assigned to this vulnerability. NOTE: This vulnerability only affects products that are no longer supported by the maintainer.
cves:
- CVE-2020-36625
references:
- fix: https://github.com/destinygg/chat/pull/35
- fix: https://github.com/destinygg/chat/commit/bebd256fc3063111fb4503ca25e005ebf6e73780
- web: https://vuldb.com/?id.216521
```",True,"x/vulndb: potential Go vuln in github.com/destinygg/chat: CVE-2020-36625 - CVE-2020-36625 references [github.com/destinygg/chat](https://github.com/destinygg/chat), which may be a Go module.
Description:
** UNSUPPORTED WHEN ASSIGNED ** A vulnerability was found in destiny.gg chat. It has been rated as problematic. This issue affects the function websocket.Upgrader of the file main.go. The manipulation leads to cross-site request forgery. The attack may be initiated remotely. The name of the patch is bebd256fc3063111fb4503ca25e005ebf6e73780. It is recommended to apply a patch to fix this issue. The identifier VDB-216521 was assigned to this vulnerability. NOTE: This vulnerability only affects products that are no longer supported by the maintainer.
References:
- NIST: https://nvd.nist.gov/vuln/detail/CVE-2020-36625
- JSON: https://github.com/CVEProject/cvelist/tree/746b542db7536ae79b7aa4e51d1d9965c12d786f/2020/36xxx/CVE-2020-36625.json
- fix: https://github.com/destinygg/chat/pull/35
- fix: https://github.com/destinygg/chat/commit/bebd256fc3063111fb4503ca25e005ebf6e73780
- web: https://vuldb.com/?id.216521
- Imported by: https://pkg.go.dev/github.com/destinygg/chat?tab=importedby
Cross references:
No existing reports found with this module or alias.
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/destinygg/chat
packages:
- package: chat
description: |
** UNSUPPORTED WHEN ASSIGNED ** A vulnerability was found in destiny.gg chat. It has been rated as problematic. This issue affects the function websocket.Upgrader of the file main.go. The manipulation leads to cross-site request forgery. The attack may be initiated remotely. The name of the patch is bebd256fc3063111fb4503ca25e005ebf6e73780. It is recommended to apply a patch to fix this issue. The identifier VDB-216521 was assigned to this vulnerability. NOTE: This vulnerability only affects products that are no longer supported by the maintainer.
cves:
- CVE-2020-36625
references:
- fix: https://github.com/destinygg/chat/pull/35
- fix: https://github.com/destinygg/chat/commit/bebd256fc3063111fb4503ca25e005ebf6e73780
- web: https://vuldb.com/?id.216521
```",1,x vulndb potential go vuln in github com destinygg chat cve cve references which may be a go module description unsupported when assigned a vulnerability was found in destiny gg chat it has been rated as problematic this issue affects the function websocket upgrader of the file main go the manipulation leads to cross site request forgery the attack may be initiated remotely the name of the patch is it is recommended to apply a patch to fix this issue the identifier vdb was assigned to this vulnerability note this vulnerability only affects products that are no longer supported by the maintainer references nist json fix fix web imported by cross references no existing reports found with this module or alias see for instructions on how to triage this report modules module github com destinygg chat packages package chat description unsupported when assigned a vulnerability was found in destiny gg chat it has been rated as problematic this issue affects the function websocket upgrader of the file main go the manipulation leads to cross site request forgery the attack may be initiated remotely the name of the patch is it is recommended to apply a patch to fix this issue the identifier vdb was assigned to this vulnerability note this vulnerability only affects products that are no longer supported by the maintainer cves cve references fix fix web ,1
1246,16618258380.0,IssuesEvent,2021-06-02 19:48:50,Azure/azure-functions-host,https://api.github.com/repos/Azure/azure-functions-host,closed,Include extension config when logging host.json,Supportability,"For diagnostics to detect common errors such as using a newly introduced host.json setting with an older extension that does not support that setting, we need the extensions section of host.json logged. The options logger for the extension won't log unrecognized settings, so the 'Host configuration file read' log entry is the best way to find those unrecognized settings.",True,"Include extension config when logging host.json - For diagnostics to detect common errors such as using a newly introduced host.json setting with an older extension that does not support that setting, we need the extensions section of host.json logged. The options logger for the extension won't log unrecognized settings, so the 'Host configuration file read' log entry is the best way to find those unrecognized settings.",1,include extension config when logging host json for diagnostics to detect common errors such as using a newly introduced host json setting with an older extension that does not support that setting we need the extensions section of host json logged the options logger for the extension won t log unrecognized settings so the host configuration file read log entry is the best way to find those unrecognized settings ,1
1218,15833171600.0,IssuesEvent,2021-04-06 15:21:47,openwall/john,https://api.github.com/repos/openwall/john,closed,"Reduce, then remove dependency on OpenSSL",enhancement portability,"We should reduce the number of formats that use OpenSSL by unconditionally(?) switching them to our own crypto primitives where we have those. Then add a way to build without OpenSSL, by disabling the few remaining OpenSSL-dependent formats in such builds. Maybe add compile-time warnings about that. If those formats are numerous or/and deemed important, then we should fail a build by default when there's no OpenSSL, and suggest/require explicit `--without-openssl` or `--disable-openssl` for the build to succeed anyway.
I think not using OpenSSL's crypto primitives unconditionally might have a slight performance impact on uses of scalar SHA-256 and SHA-512, because OpenSSL's code for those has SIMD versions of the message scheduling step, whereas our scalar SHA-256 and SHA-512 code is in fact purely scalar. We might want to measure the impact of this vs. the savings of having avoided OpenSSL's context zeroization on `SHA*_Final()`.
Assigning to @magnumripper at least for the autoconf changes involved in this. I will probably help with changes related to #3917 (if any).",True,"Reduce, then remove dependency on OpenSSL - We should reduce the number of formats that use OpenSSL by unconditionally(?) switching them to our own crypto primitives where we have those. Then add a way to build without OpenSSL, by disabling the few remaining OpenSSL-dependent formats in such builds. Maybe add compile-time warnings about that. If those formats are numerous or/and deemed important, then we should fail a build by default when there's no OpenSSL, and suggest/require explicit `--without-openssl` or `--disable-openssl` for the build to succeed anyway.
I think not using OpenSSL's crypto primitives unconditionally might have a slight performance impact on uses of scalar SHA-256 and SHA-512, because OpenSSL's code for those has SIMD versions of the message scheduling step, whereas our scalar SHA-256 and SHA-512 code is in fact purely scalar. We might want to measure the impact of this vs. the savings of having avoided OpenSSL's context zeroization on `SHA*_Final()`.
Assigning to @magnumripper at least for the autoconf changes involved in this. I will probably help with changes related to #3917 (if any).",1,reduce then remove dependency on openssl we should reduce the number of formats that use openssl by unconditionally switching them to our own crypto primitives where we have those then add a way to build without openssl by disabling the few remaining openssl dependent formats in such builds maybe add compile time warnings about that if those formats are numerous or and deemed important then we should fail a build by default when there s no openssl and suggest require explicit without openssl or disable openssl for the build to succeed anyway i think not using openssl s crypto primitives unconditionally might have slight performance impact on uses of scalar sha and sha because openssl s code for those has simd versions of the message scheduling step whereas our scalar sha and sha code is in fact purely scalar we might want to measure the impact of this vs the savings of having avoided openssl s context zeroization on sha final assigning to magnumripper at least for the autoconf changes involved in this i will probably help with changes related to if any ,1
1931,30299718655.0,IssuesEvent,2023-07-10 04:20:24,jqlang/jq,https://api.github.com/repos/jqlang/jq,closed,Require binary for ARM64 architecture,release/packaging portability,"This is a feature request and not a bug report.
I was trying to build **zarplata/concourse-git-bitbucket-pr-resource** image on arm64 platform but it requires a jq binary which is not available for the arm64 platform.
I have built the binary successfully by following the steps below:
- Cloned the package and ran git submodule update --init to get oniguruma and also installed libtool, autoconf make.
- Ran autoreconf -fi and then ./configure --with-oniguruma=builtin.
- Ran make -j8 and make check.
Do you have any plans to release binary for arm64?
It will be very helpful if the binary is released for arm64.",True,"Require binary for ARM64 architecture - This is a feature request and not a bug report.
I was trying to build **zarplata/concourse-git-bitbucket-pr-resource** image on arm64 platform but it requires a jq binary which is not available for the arm64 platform.
I have built the binary successfully by following the steps below:
- Cloned the package and ran git submodule update --init to get oniguruma and also installed libtool, autoconf make.
- Ran autoreconf -fi and then ./configure --with-oniguruma=builtin.
- Ran make -j8 and make check.
Do you have any plans to release binary for arm64?
It will be very helpful if the binary is released for arm64.",1,require binary for architecture this is a feature request and not a bug report i was trying to build zarplata concourse git bitbucket pr resource image on platform but it requires a jq binary which is not available for the platform i have built the binary successfully by following below steps cloned the package and ran git submodule update init to get oniguruma and also installed libtool autoconf make ran autoreconf fi and then configure with oniguruma builtin ran make and make check do you have any plans to release binary for it will be very helpful if the binary is released for ,1
248259,7928598374.0,IssuesEvent,2018-07-06 12:20:44,centreon/centreon,https://api.github.com/repos/centreon/centreon,closed,[2.8.4] restore broker configuration with clapi generate too much output and input,area/api area/broker area/configuration kind/bug priority/minor,"
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
**Centreon Web version**: 2.8.4
**Centreon Engine version**: 1.7.0
**Centreon Broker version**: 3.0.3
**OS**: ISO Centreon 3.4 (CentOS)
**Additional environment details (AWS, VirtualBox, physical, etc.):**
**Steps to reproduce the issue:**
1. Save config with Clapi
2. Delete configuration engine, broker, poller
3. Restore config with backup
**Describe the results you received:**
The central-broker-master configuration has too much input and output. See screenshots:



It generates a wrong config:
> ?xml version=""1.0"" encoding=""UTF-8""?
centreonBroker
broker_id ![CDATA[5]] /broker_id
broker_name ![CDATA[central-broker-master]] /broker_name
poller_id ![CDATA[2]] /poller_id
poller_name ![CDATA[Central]] /poller_name
module_directory ![CDATA[/usr/share/centreon/lib/centreon-broker]] /module_directory
log_timestamp ![CDATA[1]] /log_timestamp
log_thread_id ![CDATA[1]] /log_thread_id
event_queue_max_size ![CDATA[50000]] /event_queue_max_size
command_file ![CDATA[]] /command_file
input
type ![CDATA[ipv4]] /type
/input
logger
type ![CDATA[file]] /type
/logger
output
type ![CDATA[sql]] /type
/output
input
name ![CDATA[central-broker-master-input]] /name
/input
logger
name ![CDATA[/var/log/centreon-broker/central-broker-master.log]] /name
/logger
output
name ![CDATA[central-broker-master-sql]] /name
retry_interval ![CDATA[60]] /retry_interval
buffering_timeout ![CDATA[0]] /buffering_timeout
type ![CDATA[ipv4]] /type
failover ![CDATA[central-broker-master-sql-output-failover]] /failover
/output
output
name ![CDATA[centreon-broker-master-rrd]] /name
retry_interval ![CDATA[60]] /retry_interval
buffering_timeout ![CDATA[0]] /buffering_timeout
type ![CDATA[storage]] /type
failover ![CDATA[centreon-broker-master-rrd-output-failover]] /failover
/output
output
name ![CDATA[central-broker-master-perfdata]] /name
db_host ![CDATA[172.16.209.106]] /db_host
db_port ![CDATA[2003]] /db_port
db_password ![CDATA[]] /db_password
queries_per_transaction ![CDATA[1000]] /queries_per_transaction
type ![CDATA[graphite]] /type
failover ![CDATA[central-broker-master-perfdata-output-failover]] /failover
/output
output
name ![CDATA[graphite]] /name
/output
output
type ![CDATA[file]] /type
name ![CDATA[central-broker-master-sql-output-failover]] /name
path ![CDATA[/var/lib/centreon-broker/central-broker-master_central-broker-master-sql.retention]] /path
protocol ![CDATA[bbdo]] /protocol
compression ![CDATA[auto]] /compression
max_size ![CDATA[524288000]] /max_size
/output
output
type ![CDATA[file]] /type
name ![CDATA[centreon-broker-master-rrd-output-failover]] /name
path ![CDATA[/var/lib/centreon-broker/central-broker-master_centreon-broker-master-rrd.retention]] /path
protocol ![CDATA[bbdo]] /protocol
compression ![CDATA[auto]] /compression
max_size ![CDATA[524288000]] /max_size
/output
output
type ![CDATA[file]] /type
name ![CDATA[central-broker-master-perfdata-output-failover]] /name
path ![CDATA[/var/lib/centreon-broker/central-broker-master_central-broker-master-perfdata.retention]] /path
protocol ![CDATA[bbdo]] /protocol
compression ![CDATA[auto]] /compression
max_size ![CDATA[524288000]] /max_size
/output
temporary
type ![CDATA[file]] /type
name ![CDATA[central-broker-master-temporary]] /name
path ![CDATA[/var/lib/centreon-broker/central-broker-master.temporary]] /path
protocol ![CDATA[bbdo]] /protocol
compression ![CDATA[auto]] /compression
max_size ![CDATA[524288000]] /max_size
/temporary
stats
type ![CDATA[stats]] /type
name ![CDATA[central-broker-master-stats]] /name
json_fifo ![CDATA[/var/lib/centreon-broker/central-broker-master-stats.json]] /json_fifo
/stats
/centreonBroker
**Describe the results you expected:**
**Additional information you think important (e.g. issue happens only occasionally):**
Here is the problem
Output and Input export starts at zero instead of 1
example in centreon-web 2.8.4:
CENTBROKERCFG;ADDOUTPUT;central-broker-master;central-broker-master-sql;sql
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;db_type;mysql
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;failover;
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;retry_interval;60
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;buffering_timeout;0
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;db_host;localhost
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;db_port;3306
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;db_user;centreon
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;db_password;password
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;db_name;centreon_storage
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;queries_per_transaction;5000
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;read_timeout;5
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;check_replication;no
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;cleanup_check_interval;
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;instance_timeout;
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;type;sql
CENTBROKERCFG;ADDOUTPUT;central-broker-master;central-broker-master-perfdata;storage
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;interval;60
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;retry_interval;60
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;buffering_timeout;0
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;failover;
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;length;15552000
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_type;mysql
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_host;localhost
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_port;3306
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_user;centreon
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_password;password
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_name;centreon_storage
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;queries_per_transaction;5000
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;read_timeout;5
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;check_replication;no
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;rebuild_check_interval;
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;store_in_data_bin;yes
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;insert_in_index_data;1
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;type;storage
CENTBROKERCFG;ADDOUTPUT;central-broker-master;central-broker-master-rrd;ipv4
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;port;5670
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;host;127.0.0.1
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;failover;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;retry_interval;60
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;buffering_timeout;0
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;protocol;bbdo
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;tls;no
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;private_key;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;public_cert;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;ca_certificate;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;negotiation;yes
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;one_peer_retention_mode;no
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;compression;auto
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;compression_level;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;compression_buffer;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;type;ipv4
Example in centreon-web 2.8.3:
CENTBROKERCFG;ADDINPUT;central-broker-master;central-broker-master-input;ipv4
CENTBROKERCFG;SETINPUT;central-broker-master;1;port;5669
CENTBROKERCFG;SETINPUT;central-broker-master;1;buffering_timeout;0
CENTBROKERCFG;SETINPUT;central-broker-master;1;host;
CENTBROKERCFG;SETINPUT;central-broker-master;1;failover;
CENTBROKERCFG;SETINPUT;central-broker-master;1;retry_interval;60
CENTBROKERCFG;SETINPUT;central-broker-master;1;protocol;bbdo
CENTBROKERCFG;SETINPUT;central-broker-master;1;tls;auto
CENTBROKERCFG;SETINPUT;central-broker-master;1;private_key;
CENTBROKERCFG;SETINPUT;central-broker-master;1;public_cert;
CENTBROKERCFG;SETINPUT;central-broker-master;1;ca_certificate;
CENTBROKERCFG;SETINPUT;central-broker-master;1;negociation;yes
CENTBROKERCFG;SETINPUT;central-broker-master;1;compression;auto
CENTBROKERCFG;SETINPUT;central-broker-master;1;compression_level;
CENTBROKERCFG;SETINPUT;central-broker-master;1;compression_buffer;
CENTBROKERCFG;SETINPUT;central-broker-master;1;type;ipv4
CENTBROKERCFG;ADDLOGGER;central-broker-master;/var/log/centreon-broker/central-broker-master.log;file
CENTBROKERCFG;SETLOGGER;central-broker-master;1;config;yes
CENTBROKERCFG;SETLOGGER;central-broker-master;1;debug;no
CENTBROKERCFG;SETLOGGER;central-broker-master;1;error;yes
CENTBROKERCFG;SETLOGGER;central-broker-master;1;info;no
CENTBROKERCFG;SETLOGGER;central-broker-master;1;level;low
CENTBROKERCFG;SETLOGGER;central-broker-master;1;max_size;
CENTBROKERCFG;SETLOGGER;central-broker-master;1;type;file
CENTBROKERCFG;ADDOUTPUT;central-broker-master;central-broker-master-sql;sql
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_type;mysql
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;retry_interval;60
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;buffering_timeout;0
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;failover;
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_host;localhost
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_port;3306
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_user;centreon
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_password;password
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_name;centreon_storage
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;queries_per_transaction;
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;read_timeout;
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;type;sql
CENTBROKERCFG;ADDOUTPUT;central-broker-master;centreon-broker-master-rrd;ipv4
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;port;5670
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;buffering_timeout;0
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;host;localhost
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;failover;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;retry_interval;60
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;protocol;bbdo
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;tls;no
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;private_key;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;public_cert;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;ca_certificate;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;negociation;yes
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;compression;no
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;compression_level;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;compression_buffer;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;type;ipv4
CENTBROKERCFG;ADDOUTPUT;central-broker-master;central-broker-master-perfdata;storage
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;interval;60
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;failover;
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;retry_interval;60
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;buffering_timeout;0
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;length;15552000
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;db_type;mysql
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;db_host;localhost
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;db_port;3306
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;db_user;centreon
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;db_password;password
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;db_name;centreon_storage
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;queries_per_transaction;
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;read_timeout;
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;check_replication;no
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;rebuild_check_interval;
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;type;storage
",1.0,"[2.8.4] restore broker configuration with clapi generate too much output and input -
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
**Centreon Web version**: 2.8.4
**Centreon Engine version**: 1.7.0
**Centreon Broker version**: 3.0.3
**OS**: ISO Centreon 3.4 (CentOS)
**Additional environment details (AWS, VirtualBox, physical, etc.):**
**Steps to reproduce the issue:**
1. Save config with Clapi
2. Delete configuration engine, broker, poller
3. Restore config with backup
**Describe the results you received:**
The central-broker-master configuration has too much input and output. See screenshots:



It generates a wrong config:
> ?xml version=""1.0"" encoding=""UTF-8""?
centreonBroker
broker_id ![CDATA[5]] /broker_id
broker_name ![CDATA[central-broker-master]] /broker_name
poller_id ![CDATA[2]] /poller_id
poller_name ![CDATA[Central]] /poller_name
module_directory ![CDATA[/usr/share/centreon/lib/centreon-broker]] /module_directory
log_timestamp ![CDATA[1]] /log_timestamp
log_thread_id ![CDATA[1]] /log_thread_id
event_queue_max_size ![CDATA[50000]] /event_queue_max_size
command_file ![CDATA[]] /command_file
input
type ![CDATA[ipv4]] /type
/input
logger
type ![CDATA[file]] /type
/logger
output
type ![CDATA[sql]] /type
/output
input
name ![CDATA[central-broker-master-input]] /name
/input
logger
name ![CDATA[/var/log/centreon-broker/central-broker-master.log]] /name
/logger
output
name ![CDATA[central-broker-master-sql]] /name
retry_interval ![CDATA[60]] /retry_interval
buffering_timeout ![CDATA[0]] /buffering_timeout
type ![CDATA[ipv4]] /type
failover ![CDATA[central-broker-master-sql-output-failover]] /failover
/output
output
name ![CDATA[centreon-broker-master-rrd]] /name
retry_interval ![CDATA[60]] /retry_interval
buffering_timeout ![CDATA[0]] /buffering_timeout
type ![CDATA[storage]] /type
failover ![CDATA[centreon-broker-master-rrd-output-failover]] /failover
/output
output
name ![CDATA[central-broker-master-perfdata]] /name
db_host ![CDATA[172.16.209.106]] /db_host
db_port ![CDATA[2003]] /db_port
db_password ![CDATA[]] /db_password
queries_per_transaction ![CDATA[1000]] /queries_per_transaction
type ![CDATA[graphite]] /type
failover ![CDATA[central-broker-master-perfdata-output-failover]] /failover
/output
output
name ![CDATA[graphite]] /name
/output
output
type ![CDATA[file]] /type
name ![CDATA[central-broker-master-sql-output-failover]] /name
path ![CDATA[/var/lib/centreon-broker/central-broker-master_central-broker-master-sql.retention]] /path
protocol ![CDATA[bbdo]] /protocol
compression ![CDATA[auto]] /compression
max_size ![CDATA[524288000]] /max_size
/output
output
type ![CDATA[file]] /type
name ![CDATA[centreon-broker-master-rrd-output-failover]] /name
path ![CDATA[/var/lib/centreon-broker/central-broker-master_centreon-broker-master-rrd.retention]] /path
protocol ![CDATA[bbdo]] /protocol
compression ![CDATA[auto]] /compression
max_size ![CDATA[524288000]] /max_size
/output
output
type ![CDATA[file]] /type
name ![CDATA[central-broker-master-perfdata-output-failover]] /name
path ![CDATA[/var/lib/centreon-broker/central-broker-master_central-broker-master-perfdata.retention]] /path
protocol ![CDATA[bbdo]] /protocol
compression ![CDATA[auto]] /compression
max_size ![CDATA[524288000]] /max_size
/output
temporary
type ![CDATA[file]] /type
name ![CDATA[central-broker-master-temporary]] /name
path ![CDATA[/var/lib/centreon-broker/central-broker-master.temporary]] /path
protocol ![CDATA[bbdo]] /protocol
compression ![CDATA[auto]] /compression
max_size ![CDATA[524288000]] /max_size
/temporary
stats
type ![CDATA[stats]] /type
name ![CDATA[central-broker-master-stats]] /name
json_fifo ![CDATA[/var/lib/centreon-broker/central-broker-master-stats.json]] /json_fifo
/stats
/centreonBroker
**Describe the results you expected:**
**Additional information you think important (e.g. issue happens only occasionally):**
Here is the problem
Output and Input export starts at zero instead of 1
example in centreon-web 2.8.4:
CENTBROKERCFG;ADDOUTPUT;central-broker-master;central-broker-master-sql;sql
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;db_type;mysql
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;failover;
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;retry_interval;60
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;buffering_timeout;0
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;db_host;localhost
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;db_port;3306
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;db_user;centreon
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;db_password;password
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;db_name;centreon_storage
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;queries_per_transaction;5000
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;read_timeout;5
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;check_replication;no
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;cleanup_check_interval;
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;instance_timeout;
CENTBROKERCFG;SETOUTPUT;central-broker-master;0;type;sql
CENTBROKERCFG;ADDOUTPUT;central-broker-master;central-broker-master-perfdata;storage
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;interval;60
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;retry_interval;60
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;buffering_timeout;0
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;failover;
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;length;15552000
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_type;mysql
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_host;localhost
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_port;3306
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_user;centreon
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_password;password
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_name;centreon_storage
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;queries_per_transaction;5000
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;read_timeout;5
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;check_replication;no
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;rebuild_check_interval;
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;store_in_data_bin;yes
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;insert_in_index_data;1
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;type;storage
CENTBROKERCFG;ADDOUTPUT;central-broker-master;central-broker-master-rrd;ipv4
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;port;5670
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;host;127.0.0.1
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;failover;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;retry_interval;60
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;buffering_timeout;0
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;protocol;bbdo
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;tls;no
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;private_key;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;public_cert;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;ca_certificate;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;negotiation;yes
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;one_peer_retention_mode;no
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;compression;auto
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;compression_level;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;compression_buffer;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;type;ipv4
Example in centreon-web 2.8.3:
CENTBROKERCFG;ADDINPUT;central-broker-master;central-broker-master-input;ipv4
CENTBROKERCFG;SETINPUT;central-broker-master;1;port;5669
CENTBROKERCFG;SETINPUT;central-broker-master;1;buffering_timeout;0
CENTBROKERCFG;SETINPUT;central-broker-master;1;host;
CENTBROKERCFG;SETINPUT;central-broker-master;1;failover;
CENTBROKERCFG;SETINPUT;central-broker-master;1;retry_interval;60
CENTBROKERCFG;SETINPUT;central-broker-master;1;protocol;bbdo
CENTBROKERCFG;SETINPUT;central-broker-master;1;tls;auto
CENTBROKERCFG;SETINPUT;central-broker-master;1;private_key;
CENTBROKERCFG;SETINPUT;central-broker-master;1;public_cert;
CENTBROKERCFG;SETINPUT;central-broker-master;1;ca_certificate;
CENTBROKERCFG;SETINPUT;central-broker-master;1;negociation;yes
CENTBROKERCFG;SETINPUT;central-broker-master;1;compression;auto
CENTBROKERCFG;SETINPUT;central-broker-master;1;compression_level;
CENTBROKERCFG;SETINPUT;central-broker-master;1;compression_buffer;
CENTBROKERCFG;SETINPUT;central-broker-master;1;type;ipv4
CENTBROKERCFG;ADDLOGGER;central-broker-master;/var/log/centreon-broker/central-broker-master.log;file
CENTBROKERCFG;SETLOGGER;central-broker-master;1;config;yes
CENTBROKERCFG;SETLOGGER;central-broker-master;1;debug;no
CENTBROKERCFG;SETLOGGER;central-broker-master;1;error;yes
CENTBROKERCFG;SETLOGGER;central-broker-master;1;info;no
CENTBROKERCFG;SETLOGGER;central-broker-master;1;level;low
CENTBROKERCFG;SETLOGGER;central-broker-master;1;max_size;
CENTBROKERCFG;SETLOGGER;central-broker-master;1;type;file
CENTBROKERCFG;ADDOUTPUT;central-broker-master;central-broker-master-sql;sql
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_type;mysql
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;retry_interval;60
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;buffering_timeout;0
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;failover;
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_host;localhost
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_port;3306
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_user;centreon
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_password;password
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;db_name;centreon_storage
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;queries_per_transaction;
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;read_timeout;
CENTBROKERCFG;SETOUTPUT;central-broker-master;1;type;sql
CENTBROKERCFG;ADDOUTPUT;central-broker-master;centreon-broker-master-rrd;ipv4
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;port;5670
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;buffering_timeout;0
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;host;localhost
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;failover;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;retry_interval;60
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;protocol;bbdo
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;tls;no
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;private_key;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;public_cert;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;ca_certificate;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;negociation;yes
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;compression;no
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;compression_level;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;compression_buffer;
CENTBROKERCFG;SETOUTPUT;central-broker-master;2;type;ipv4
CENTBROKERCFG;ADDOUTPUT;central-broker-master;central-broker-master-perfdata;storage
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;interval;60
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;failover;
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;retry_interval;60
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;buffering_timeout;0
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;length;15552000
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;db_type;mysql
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;db_host;localhost
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;db_port;3306
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;db_user;centreon
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;db_password;password
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;db_name;centreon_storage
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;queries_per_transaction;
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;read_timeout;
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;check_replication;no
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;rebuild_check_interval;
CENTBROKERCFG;SETOUTPUT;central-broker-master;3;type;storage
",0, restore broker configuration with clapi generate too much output and input if you are reporting a new issue make sure that we do not have any duplicates already open you can ensure this by searching the issue list for this repository if there is a duplicate please close your issue and add a comment to the existing issue instead if you think that your problem is a bug please add a description organized like the bug report information shown below if you can t provide all this information it s possible that we will not be able to debug and fix your problems and so we will be obliged to close the ticket nevertheless you will be able to provide them later in order to reactivate it when we need more information we will reply in order to ask some element in order if you do not answer in the next days the ticket will be automaticaly closed thank you to describe your issue in english bug report information centreon web version centreon engine version centreon broker version os iso centreon centos additional environment details aws virtualbox physical etc steps to reproduce the issue save config with clapi delete configuration engine broker poller restore config with backup describe the results your results the central broker master configuration have too much input and ouput see screenshots generate a wrong config xml version encoding utf centreonbroker broker id broker id broker name broker name poller id poller id poller name poller name module directory module directory log timestamp log timestamp log thread id log thread id event queue max size event queue max size command file command file input type type input logger type type logger output type type output input name name input logger name name logger output name name retry interval retry interval buffering timeout buffering timeout type type failover failover output output name name retry interval retry interval buffering timeout buffering timeout type type failover failover output output name name db host db host db port db port db password db password queries per transaction queries per transaction type type failover failover output output name name output output type type name name path path protocol protocol compression compression max size max size output output type type name name path path protocol protocol compression compression max size max size output output type type name name path path protocol protocol compression compression max size max size output temporary type type name name path path protocol protocol compression compression max size max size temporary stats type type name name json fifo json fifo stats centreonbroker describe the results you expected additional information you think important e g issue happens only occasionally here is the problem output and input export starts at zero instead of example in centreon web centbrokercfg addoutput central broker master central broker master sql sql centbrokercfg setoutput central broker master db type mysql centbrokercfg setoutput central broker master failover centbrokercfg setoutput central broker master retry interval centbrokercfg setoutput central broker master buffering timeout centbrokercfg setoutput central broker master db host localhost centbrokercfg setoutput central broker master db port centbrokercfg setoutput central broker master db user centreon centbrokercfg setoutput central broker master db password password centbrokercfg setoutput central broker master db name centreon storage centbrokercfg setoutput central broker master queries per transaction 
centbrokercfg setoutput central broker master read timeout centbrokercfg setoutput central broker master check replication no centbrokercfg setoutput central broker master cleanup check interval centbrokercfg setoutput central broker master instance timeout centbrokercfg setoutput central broker master type sql centbrokercfg addoutput central broker master central broker master perfdata storage centbrokercfg setoutput central broker master interval centbrokercfg setoutput central broker master retry interval centbrokercfg setoutput central broker master buffering timeout centbrokercfg setoutput central broker master failover centbrokercfg setoutput central broker master length centbrokercfg setoutput central broker master db type mysql centbrokercfg setoutput central broker master db host localhost centbrokercfg setoutput central broker master db port centbrokercfg setoutput central broker master db user centreon centbrokercfg setoutput central broker master db password password centbrokercfg setoutput central broker master db name centreon storage centbrokercfg setoutput central broker master queries per transaction centbrokercfg setoutput central broker master read timeout centbrokercfg setoutput central broker master check replication no centbrokercfg setoutput central broker master rebuild check interval centbrokercfg setoutput central broker master store in data bin yes centbrokercfg setoutput central broker master insert in index data centbrokercfg setoutput central broker master type storage centbrokercfg addoutput central broker master central broker master rrd centbrokercfg setoutput central broker master port centbrokercfg setoutput central broker master host centbrokercfg setoutput central broker master failover centbrokercfg setoutput central broker master retry interval centbrokercfg setoutput central broker master buffering timeout centbrokercfg setoutput central broker master protocol bbdo centbrokercfg setoutput central broker master tls no centbrokercfg setoutput central broker master private key centbrokercfg setoutput central broker master public cert centbrokercfg setoutput central broker master ca certificate centbrokercfg setoutput central broker master negotiation yes centbrokercfg setoutput central broker master one peer retention mode no centbrokercfg setoutput central broker master compression auto centbrokercfg setoutput central broker master compression level centbrokercfg setoutput central broker master compression buffer centbrokercfg setoutput central broker master type exemple in centreon web centbrokercfg addinput central broker master central broker master input centbrokercfg setinput central broker master port centbrokercfg setinput central broker master buffering timeout centbrokercfg setinput central broker master host centbrokercfg setinput central broker master failover centbrokercfg setinput central broker master retry interval centbrokercfg setinput central broker master protocol bbdo centbrokercfg setinput central broker master tls auto centbrokercfg setinput central broker master private key centbrokercfg setinput central broker master public cert centbrokercfg setinput central broker master ca certificate centbrokercfg setinput central broker master negociation yes centbrokercfg setinput central broker master compression auto centbrokercfg setinput central broker master compression level centbrokercfg setinput central broker master compression buffer centbrokercfg setinput central broker master type centbrokercfg addlogger central broker master 
var log centreon broker central broker master log file centbrokercfg setlogger central broker master config yes centbrokercfg setlogger central broker master debug no centbrokercfg setlogger central broker master error yes centbrokercfg setlogger central broker master info no centbrokercfg setlogger central broker master level low centbrokercfg setlogger central broker master max size centbrokercfg setlogger central broker master type file centbrokercfg addoutput central broker master central broker master sql sql centbrokercfg setoutput central broker master db type mysql centbrokercfg setoutput central broker master retry interval centbrokercfg setoutput central broker master buffering timeout centbrokercfg setoutput central broker master failover centbrokercfg setoutput central broker master db host localhost centbrokercfg setoutput central broker master db port centbrokercfg setoutput central broker master db user centreon centbrokercfg setoutput central broker master db password password centbrokercfg setoutput central broker master db name centreon storage centbrokercfg setoutput central broker master queries per transaction centbrokercfg setoutput central broker master read timeout centbrokercfg setoutput central broker master type sql centbrokercfg addoutput central broker master centreon broker master rrd centbrokercfg setoutput central broker master port centbrokercfg setoutput central broker master buffering timeout centbrokercfg setoutput central broker master host localhost centbrokercfg setoutput central broker master failover centbrokercfg setoutput central broker master retry interval centbrokercfg setoutput central broker master protocol bbdo centbrokercfg setoutput central broker master tls no centbrokercfg setoutput central broker master private key centbrokercfg setoutput central broker master public cert centbrokercfg setoutput central broker master ca certificate centbrokercfg setoutput central broker master negociation yes centbrokercfg setoutput central broker master compression no centbrokercfg setoutput central broker master compression level centbrokercfg setoutput central broker master compression buffer centbrokercfg setoutput central broker master type centbrokercfg addoutput central broker master central broker master perfdata storage centbrokercfg setoutput central broker master interval centbrokercfg setoutput central broker master failover centbrokercfg setoutput central broker master retry interval centbrokercfg setoutput central broker master buffering timeout centbrokercfg setoutput central broker master length centbrokercfg setoutput central broker master db type mysql centbrokercfg setoutput central broker master db host localhost centbrokercfg setoutput central broker master db port centbrokercfg setoutput central broker master db user centreon centbrokercfg setoutput central broker master db password password centbrokercfg setoutput central broker master db name centreon storage centbrokercfg setoutput central broker master queries per transaction centbrokercfg setoutput central broker master read timeout centbrokercfg setoutput central broker master check replication no centbrokercfg setoutput central broker master rebuild check interval centbrokercfg setoutput central broker master type storage ,0
32267,6756454658.0,IssuesEvent,2017-10-24 07:08:41,primefaces/primeng,https://api.github.com/repos/primefaces/primeng,closed,paginatorPosition not working,confirmed defect,"### There is no guarantee of receiving a response in the GitHub Issue Tracker. If you'd like to secure our response, you may consider *PrimeNG PRO Support* where support is provided within 4 business hours
**I'm submitting a ...** (check one with ""x"")
```
[x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
Please fork the plunkr below and create a case demonstrating your bug report. Issues without a plunkr have much less possibility to be reviewed.
http://plnkr.co/edit/Qi2Nw2imLpDUF8NJinYw?p=preview
**Current behavior**
On the p-dataTable component, if you use [paginator]=""true"", the paginator component shows at the bottom of the datatable, which is expected. However, if you add [paginatorPosition]=""both"" or [paginatorPosition]=""top"", the paginator component completely disappears even though the data is paginated.
**Expected behavior**
The following code in the html template should show the paginator at both the top and the bottom of the table.
**Minimal reproduction of the problem with instructions**
**What is the motivation / use case for changing the behavior?**
**Please tell us about your environment:**
* **Angular version:** 2.0.X
4.3.1
* **PrimeNG version:** 2.0.X
4.1.2
* **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ]
Chrome (so far)
* **Language:** [all | TypeScript X.X | ES6/7 | ES5]
* **Node (for AoT issues):** `node --version` =
",1.0,"paginatorPosition not working - ### There is no guarantee in receiving a response in GitHub Issue Tracker, If you'd like to secure our response, you may consider *PrimeNG PRO Support* where support is provided within 4 business hours
**I'm submitting a ...** (check one with ""x"")
```
[x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
Please fork the plunkr below and create a case demonstrating your bug report. Issues without a plunkr have much less possibility to be reviewed.
http://plnkr.co/edit/Qi2Nw2imLpDUF8NJinYw?p=preview
**Current behavior**
On the p-dataTable component, if you use [paginator]=""true"", the paginator component shows at the bottom of the datatable, which is expected. However, if you add [paginatorPosition]=""both"" or [paginatorPosition]=""top"", the paginator component completely disappears even though the data is paginated.
**Expected behavior**
The following code in the html template should show the paginator at both the top and the bottom of the table.
**Minimal reproduction of the problem with instructions**
**What is the motivation / use case for changing the behavior?**
**Please tell us about your environment:**
* **Angular version:** 2.0.X
4.3.1
* **PrimeNG version:** 2.0.X
4.1.2
* **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ]
Chrome (so far)
* **Language:** [all | TypeScript X.X | ES6/7 | ES5]
* **Node (for AoT issues):** `node --version` =
",0,paginatorposition not working there is no guarantee in receiving a response in github issue tracker if you d like to secure our response you may consider primeng pro support where support is provided within business hours i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports please fork the plunkr below and create a case demonstrating your bug report issues without a plunkr have much less possibility to be reviewed current behavior on p datatable component if use true paginator component shows at bottom of datatable which is expected however if add both or top the paginator component completely disappears even though the data is paginated expected behavior the following code in the html template should show the paginator at both the top and the bottom of the table minimal reproduction of the problem with instructions if the current behavior is a bug or you can illustrate your feature request better with an example please provide the steps to reproduce and if possible a minimal demo of the problem via or similar you can use this template as a starting point what is the motivation use case for changing the behavior please tell us about your environment angular version x primeng version x browser chrome so far language node for aot issues node version ,0
764,10220631055.0,IssuesEvent,2019-08-15 21:58:02,ParRes/Kernels,https://api.github.com/repos/ParRes/Kernels,closed,use better solution for overflow in transpose,portability,"@rfvander In looking more at the code where we handle integer overflows in transpose, I think we did it wrong. Of course, this is my fault, since I was the one who started promoting 32b integers to 64b integers.
The overflow issue emerges only when we multiply two integers, since our square matrices will never be anywhere near 2B by 2B. In Fortran, we will never have any issue indexing with 32b integers, because we only index the dimensions independently. However, in the C code when we do `A[i*order+j]`, we overflow.
I propose that we _go back to 32b integers_ and handle the overflow by explicitly casting inside of multiplied expressions like `A[i*order+j]`, which will become `A[(size_t)i*(size_t)order+(size_t)j]` or by using [C99 VLAs](http://www.drdobbs.com/the-new-cwhy-variable-length-arrays/184401444).
One motivation for this is that modern processors are still better at handling 32b loop indices.
",True,"use better solution for overflow in transpose - @rfvander In looking more at the code where we handle integer overflows in transpose, I think we did it wrong. Of course, this is my fault, since I was the one who started promoting 32b integers to 64b integers.
The overflow issue emerges only when we multiply two integers, since our square matrices will never be anywhere near 2B by 2B. In Fortran, we will never have any issue indexing with 32b integers, because we only index the dimensions independently. However, in the C code when we do `A[i*order+j]`, we overflow.
I propose that we _go back to 32b integers_ and handle the overflow by explicitly casting inside of multiplied expressions like `A[i*order+j]`, which will become `A[(size_t)i*(size_t)order+(size_t)j]` or by using [C99 VLAs](http://www.drdobbs.com/the-new-cwhy-variable-length-arrays/184401444).
One motivation for this is that modern processors are still better at handling 32b loop indices.
",1,use better solution for overflow in transpose rfvander in looking more at the code where we handle integer overflows in transpose i think we did it wrong of course this is my fault since i was the one who started promoting integers to integers the overflow issue emerges only when we multiply two integers since our square matrices will never be anywhere near by in fortran we will never have any issue indexing with integers because we only index the dimensions independently however in the c code when we do a we overflow i propose that we go back to integers and handle the overflow by explicitly casting inside of multiplied expressions like a which will become a or by using one motivation for this is that modern processors are still better at handling loop indices ,1
255,4964896198.0,IssuesEvent,2016-12-04 01:00:01,jemalloc/jemalloc,https://api.github.com/repos/jemalloc/jemalloc,closed,Create specific way to disable syscall calls.,portability,"For security purposes, syscall might be modified to filter out any but a few whitelisted system calls. The 4.3.1 codebase uses syscall if it has been defined, so might trigger some syscalls that we don't want to allow.
It's possible to disable this by undefining the JEMALLOC_HAVE_SYSCALL. However, that's set by the configure script itself, and it's not as clear what will happen if it's not defined. It would be nice if there was a configure option, or define that could be used to specifically disable the use of syscalls.",True,"Create specific way to disable syscall calls. - For security purposes, syscall might be modified to filter out any but a few whitelisted system calls. The 4.3.1 codebase uses syscall if it has been defined, so might trigger some syscalls that we don't want to allow.
It's possible to disable this by undefining the JEMALLOC_HAVE_SYSCALL. However, that's set by the configure script itself, and it's not as clear what will happen if it's not defined. It would be nice if there was a configure option, or define that could be used to specifically disable the use of syscalls.",1,create specific way to disable syscall calls for security purposes syscall might be modified to filter out any but a few whitelisted system calls the codebase uses syscall if it has been defined so might trigger some syscalls that we don t want to allow it s possible to disable this by undefining the jemalloc have syscall however that s set by the configure script itself and it s not as clear what will happen if it s not defined it would be nice if there was a configure option or define that could be used to specifically disable the use of syscalls ,1
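A minimal sketch of the kind of opt-in switch the jemalloc record above asks for, assuming a hypothetical `USE_RAW_SYSCALL` define standing in for whatever a `--disable-syscall`-style configure option would set; it is not jemalloc's actual macro.
```c
#define _GNU_SOURCE            /* for the syscall() declaration on glibc */
#include <stdio.h>
#include <unistd.h>
#ifdef USE_RAW_SYSCALL         /* hypothetical switch, e.g. set by configure */
#include <sys/syscall.h>
#endif

/* Return the current PID either via a raw syscall or via the libc wrapper,
 * so builds for syscall-whitelisted environments can avoid syscall(2). */
static long my_getpid(void)
{
#ifdef USE_RAW_SYSCALL
    return syscall(SYS_getpid);
#else
    return (long)getpid();
#endif
}

int main(void)
{
    printf("pid = %ld\n", my_getpid());
    return 0;
}
```
Builds for seccomp/whitelist-filtered environments would simply leave the define unset and take the libc-wrapper path.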
123537,10271431298.0,IssuesEvent,2019-08-23 16:06:38,LiskHQ/lisk-sdk,https://api.github.com/repos/LiskHQ/lisk-sdk,closed,Add QA test simulating node takeover,elements/P2P type: quality assurance type: test,"### Expected behavior
Add a test for the scenario in which an agent spins up multiple nodes and attempts to take over a legitimate node's connections.
### Actual behavior
There are no QA tests for this scenario. This feature did not exist prior to v2.3.
### Which version(s) does this affect? (Environment, OS, etc...)
2.3
",1.0,"Add QA test simulating node takeover - ### Expected behavior
Add a test for the scenario in which an agent spins up multiple nodes and attempts to take over a legitimate node's connections.
### Actual behavior
There are no QA tests for this scenario. This feature did not exist prior to v2.3.
### Which version(s) does this affect? (Environment, OS, etc...)
2.3
",0,add qa test simulating node takeover expected behavior add test for scenario in which an agent spins up multiple nodes and attempts to takeover a legitimate node s connections actual behavior there are no qa tests for this scenario this feature did not exist prior to which version s does this affect environment os etc ,0
198658,14991891619.0,IssuesEvent,2021-01-29 09:04:41,assemblee-virtuelle/semapps,https://api.github.com/repos/assemblee-virtuelle/semapps,opened,Test the webfinger endpoint,easy tests,"- Create an actor
- Verify that it can be retrieved via the webfinger endpoint",1.0,"Test the webfinger endpoint - - Create an actor
- Verify that it can be retrieved via the webfinger endpoint",0,tester l endpoint webfinger créer un acteur vérifier qu on peut le récupérer via l endpoint webfinger,0
79,3005768663.0,IssuesEvent,2015-07-27 04:11:21,stedolan/jq,https://api.github.com/repos/stedolan/jq,closed,Setup CI build?,portability,"Dear all,
Not exactly sure if this has been raised before, but is there any chance we could set up CI, e.g. Travis, to do the testing? I just cloned the code and it failed tests.
Best,
Dong",True,"Setup CI build? - Dear all,
Not exactly sure if this has been raised before, but is there any chance we could set up CI, e.g. Travis, to do the testing? I just cloned the code and it failed tests.
Best,
Dong",1,setup ci build dear all not exactly sure if this has been raised before any chance we could setup ci e g travis to do the testing i just cloned the code and it failed tests best dong,1
1992,32058092685.0,IssuesEvent,2023-09-24 10:29:10,networkupstools/nut,https://api.github.com/repos/networkupstools/nut,opened,"CI: add support for ""custom"" dependencies in the codebase? Case in question: how to best handle modbus extensions for NUT drivers?",packaging CI modbus portability,"I have a nagging feeling from the olden days, when many autotools projects came with `contrib`, `custom` or similar sources of third-party stuff like zlib, etc. so a single tarball could provide most of the build-required ecosystem in days before packaging... IIRC autotools were largely made because of such use-case, to consistently build such mixed codebases using either system-provided or custom-smuggled code.
Lately there are a couple of efforts relying on `libmodbus` features which are not (yet) in its mainline code and therefore not in distribution packages - PR #2063 (for issue #139) adds new support for USB, and PR #1671 relies on support for ASCII vs. binary ModBus protocols. Neither is so far included in the upstream library project, and given how slow it has lately been about merging PRs or answering issues/questions, I am not fully sure we should keep our fingers crossed and hope to merge new drivers depending on such new features.
On the other hand, specifically for the modbus-ascii part of the question, there is some development simmering from a still-open PR https://github.com/stephane/libmodbus/pull/275 from 2015, and a https://github.com/stephane/libmodbus/tree/ascii-support branch in the main project with last commits in 2022, so maybe ""they"" as domain experts had reasons (security? stability?..) to not merge this - and we would somehow compromise NUT installations by taking in ""random-quality"" code as a dependency?
WDYT: Would it make sense to expand NUT codebase to rely on potentially custom-built e.g. `libmodbus`, depending on `configure`-time tests whether the system packages provide the needed feature(s) already? There could be different ways to provide that - e.g. most likely a git submodule to our fork of libmodbus with added features (becoming a directory with a copy as part of `make dist`), but might technically be a full directory copy in the NUT sources, or a tarball to pull from location X, etc.
I suppose such extra source build would yield a `libmodbus.so/a` library file, which we may have to juggle in the installation footprint to not conflict with a system-provided one - neither to overwrite it, nor to auto-`ldload` into the same namespace by unsuspecting consumers; in this regard using a static library to build into the concerned drivers and/or `nut-scanner` (libnutscan?) may be a least-conflicting option for full-library custom builds (potentially irky for `libnutscan` and whatever its consumers might pull in - such as the system's `libmodbus` again).
Another alternative is to follow the path our PR #1671 took -- to provide the new abilities with source files in NUT codebase (`modbus-ascii.{c,h}`) which otherwise rely on standardly available `libmodbus` packages, thus with minimal long-term conflict against the OS-provided library. I guess this approach could be extended with `configure`-time checks to use either this implementation or one that hopefully eventually appears in the upstream library. It is not unlike some `common/*.c` files we have to provide string and other functions available on some platforms but not on others.
A year ago I was wary in that PR review about adding a custom semi-fork in NUT. Now that so little has changed in the library itself over the year, this in fact seems like a viable option. (There is still a question about making this secure somehow, to not introduce unreviewed errors on the HW/protocol support side...)
This partially overlaps with issue #1491 which is about general automation of such prerequisite builds (whether to facilitate NUT for Windows or other platforms with questionable support of pre-packaged build dependencies) but does not delve into customizing such dependencies' sources *for* NUT.
CC @EchterAgo @asperg @aquette @clepple ",True,"CI: add support for ""custom"" dependencies in the codebase? Case in question: how to best handle modbus extensions for NUT drivers? - I have a nagging feeling from the olden days, when many autotools projects came with `contrib`, `custom` or similar sources of third-party stuff like zlib, etc. so a single tarball could provide most of the build-required ecosystem in days before packaging... IIRC autotools were largely made because of such use-case, to consistently build such mixed codebases using either system-provided or custom-smuggled code.
Lately there are a couple of efforts relying on `libmodbus` features which are not (yet) in its mainline code and therefore not in distribution packages - PR #2063 (for issue #139) adds new support for USB, and PR #1671 relies on support for ASCII vs. binary ModBus protocols. Neither is so far included in the upstream library project, and given how slow it has lately been about merging PRs or answering issues/questions, I am not fully sure we should keep our fingers crossed and hope to merge new drivers depending on such new features.
On the other hand, specifically for the modbus-ascii part of the question, there is some development simmering from a still-open PR https://github.com/stephane/libmodbus/pull/275 from 2015, and a https://github.com/stephane/libmodbus/tree/ascii-support branch in the main project with last commits in 2022, so maybe ""they"" as domain experts had reasons (security? stability?..) to not merge this - and we would somehow compromise NUT installations by taking in ""random-quality"" code as a dependency?
WDYT: Would it make sense to expand NUT codebase to rely on potentially custom-built e.g. `libmodbus`, depending on `configure`-time tests whether the system packages provide the needed feature(s) already? There could be different ways to provide that - e.g. most likely a git submodule to our fork of libmodbus with added features (becoming a directory with a copy as part of `make dist`), but might technically be a full directory copy in the NUT sources, or a tarball to pull from location X, etc.
I suppose such extra source build would yield a `libmodbus.so/a` library file, which we may have to juggle in the installation footprint to not conflict with a system-provided one - neither to overwrite it, nor to auto-`ldload` into the same namespace by unsuspecting consumers; in this regard using a static library to build into the concerned drivers and/or `nut-scanner` (libnutscan?) may be a least-conflicting option for full-library custom builds (potentially irky for `libnutscan` and whatever its consumers might pull in - such as the system's `libmodbus` again).
Another alternative is to follow the path our PR #1671 took -- to provide the new abilities with source files in NUT codebase (`modbus-ascii.{c,h}`) which otherwise rely on standardly available `libmodbus` packages, thus with minimal long-term conflict against the OS-provided library. I guess this approach could be extended with `configure`-time checks to use either this implementation or one that hopefully eventually appears in the upstream library. It is not unlike some `common/*.c` files we have to provide string and other functions available on some platforms but not on others.
A year ago I was wary in that PR review about adding a custom semi-fork in NUT. Now that so little has changed in the library itself over the year, this in fact seems like a viable option. (There is still a question about making this secure somehow, to not introduce unreviewed errors on the HW/protocol support side...)
This partially overlaps with issue #1491 which is about general automation of such prerequisite builds (whether to facilitate NUT for Windows or other platforms with questionable support of pre-packaged build dependencies) but does not delve into customizing such dependencies' sources *for* NUT.
CC @EchterAgo @asperg @aquette @clepple ",1,ci add support for custom dependencies in the codebase case in question how to best handle modbus extensions for nut drivers i have a nagging feeling from the olden days when many autotools projects came with contrib custom or similar sources of third party stuff like zlib etc so a single tarball could provide most of the build required ecosystem in days before packaging iirc autotools were largely made because of such use case to consistently build such mixed codebases using either system provided or custom smuggled code lately there are a couple of efforts relying on libmodbus features which are not yet in its mainline code and so packages pr for issue new support for usb and pr relies on support for ascii vs binary modbus protocols neither is so far included into the upstream library project and given how slow it is lately about even merging prs or answering to issues questions i am not fully sure we should hold our fingers crossed and hopes to merge new drivers depending on such new features on the other hand specifically for the modbus ascii part of the question there is some development simmering from a still open pr from and a branch in the main project with last commits in so maybe they as domain experts had reasons security stability to not merge this and we would somehow compromise nut installations by taking in random quality code as a dependency wdyt would it make sense to expand nut codebase to rely on potentially custom built e g libmodbus depending on configure time tests whether the system packages provide the needed feature s already there could be different ways to provide that e g most likely a git submodule to our fork of libmodbus with added features becoming a directory with a copy as part of make dist but might technically be a full directory copy in the nut sources or a tarball to pull from location x etc i suppose such extra source build would yield a libmodbus so a library file which we may have to juggle in the installation footprint to not conflict with a system provided one neither to overwrite it nor to auto ldload into the same namespace by unsuspecting consumers in this regard using a static library to build into the concerned drivers and or nut scanner libnutscan may be a least conflicting option for full library custom builds potentially irky for libnutscan and whatever its consumers might pull in such as the system s libmodbus again another alternative is to follow the path our pr took to provide the new abilities with source files in nut codebase modbus ascii c h which otherwise rely on standardly available libmodbus packages thus with minimal long term conflict against the os provided library i guess this approach could be extended with configure time checks to use either this implementation or one that hopefully eventually appears in the upstream library it is not unlike some common c files we have to provide string and other functions available on some platforms but not on others a year ago i was wary in that pr review about adding a custom semi fork in nut now that so little has changed in the library itself over the year this in fact seems like a viable option there is still a question about making this secure somehow to not introduce unreviewed errors on the hw protocol support side this partially overlaps with issue which is about general automation of such prerequisite builds whether to facilitate nut for windows or other platforms with questionable support of pre packaged build dependencies but does 
not delve into customizing such dependencies sources for nut cc echterago asperg aquette clepple ,1
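For the configure-time detection discussed in the NUT record above, a typical approach is a tiny link probe compiled against the system libmodbus; if it fails to link, the build falls back to the bundled/custom copy. The probed symbol below (`modbus_new_rtu_usb`) is purely hypothetical and stands in for whichever entry point the new drivers actually need.
```c
/* conftest.c - sketch of an AC_LINK_IFELSE-style probe.  It always compiles,
 * but links only when the system libmodbus exports the probed symbol, which
 * is exactly the signal configure needs to choose system vs. bundled code. */
char modbus_new_rtu_usb(void);   /* hypothetical symbol, illustration only */

int main(void)
{
    return (int)modbus_new_rtu_usb();
}
```
On success, configure could then define, say, `HAVE_LIBMODBUS_USB`, and the driver sources would `#ifdef` between the system library and the smuggled-in one.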
367312,25732160569.0,IssuesEvent,2022-12-07 21:12:36,GoogleContainerTools/skaffold,https://api.github.com/repos/GoogleContainerTools/skaffold,closed,[Docs fixit] Rename `Skaffold API` to `Skaffold gRPC and HTTP service`,kind/documentation kind/todo area/docs docs-fixit,"The side-bar and pages titled `Skaffold API` can get confused with the `skaffold.yaml API`. We should consider explicitly renaming all occurrences of `Skaffold API` to `Skaffold gRPC and HTTP service` or something similar.
Also change the links from ` /../api` to ` /../server` or ` /../service`
https://skaffold.dev/docs/design/api/
https://skaffold.dev/docs/references/api/",1.0,"[Docs fixit] Rename `Skaffold API` to `Skaffold gRPC and HTTP service` - The side-bar and pages titled `Skaffold API` can get confused with the `skaffold.yaml API`. We should consider explicitly renaming all occurrences of `Skaffold API` to `Skaffold gRPC and HTTP service` or something similar.
Also change the links from ` /../api` to ` /../server` or ` /../service`
https://skaffold.dev/docs/design/api/
https://skaffold.dev/docs/references/api/",0, rename skaffold api to skaffold grpc and http service the side bar and pages titled skaffold api can get confused with the skaffold yaml api we should consider explicitly renaming all occurrences of skaffold api to skaffold grpc and http service or something similar also change the links from api to server or service ,0
187943,22046056421.0,IssuesEvent,2022-05-30 01:55:35,michaeldotson/contacts-app,https://api.github.com/repos/michaeldotson/contacts-app,opened,CVE-2020-8184 (High) detected in rack-2.0.7.gem,security vulnerability,"## CVE-2020-8184 - High Severity Vulnerability
Vulnerable Library - rack-2.0.7.gem
Rack provides a minimal, modular and adaptable interface for developing
web applications in Ruby. By wrapping HTTP requests and responses in
the simplest way possible, it unifies and distills the API for web
servers, web frameworks, and software in between (the so-called
middleware) into a single method call.
Also see https://rack.github.io/.
Library home page: https://rubygems.org/gems/rack-2.0.7.gem
Path to dependency file: /contacts-app/Gemfile.lock
Path to vulnerable library: /var/lib/gems/2.3.0/cache/rack-2.0.7.gem
Dependency Hierarchy:
- sass-rails-5.0.7.gem (Root Library)
- sprockets-rails-3.2.1.gem
- sprockets-3.7.2.gem
- :x: **rack-2.0.7.gem** (Vulnerable Library)
Vulnerability Details
A reliance on cookies without validation/integrity check security vulnerability exists in rack < 2.2.3, rack < 2.1.4 that makes it possible for an attacker to forge a secure or host-only cookie prefix.
Publish Date: 2020-06-19
URL: CVE-2020-8184
CVSS 3 Score Details (7.5 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://groups.google.com/forum/#!topic/rubyonrails-security/OWtmozPH9Ak
Release Date: 2020-06-19
Fix Resolution: rack - 2.1.4, 2.2.3
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-8184 (High) detected in rack-2.0.7.gem - ## CVE-2020-8184 - High Severity Vulnerability
Vulnerable Library - rack-2.0.7.gem
Rack provides a minimal, modular and adaptable interface for developing
web applications in Ruby. By wrapping HTTP requests and responses in
the simplest way possible, it unifies and distills the API for web
servers, web frameworks, and software in between (the so-called
middleware) into a single method call.
Also see https://rack.github.io/.
Library home page: https://rubygems.org/gems/rack-2.0.7.gem
Path to dependency file: /contacts-app/Gemfile.lock
Path to vulnerable library: /var/lib/gems/2.3.0/cache/rack-2.0.7.gem
Dependency Hierarchy:
- sass-rails-5.0.7.gem (Root Library)
- sprockets-rails-3.2.1.gem
- sprockets-3.7.2.gem
- :x: **rack-2.0.7.gem** (Vulnerable Library)
Vulnerability Details
A reliance on cookies without validation/integrity check security vulnerability exists in rack < 2.2.3, rack < 2.1.4 that makes it possible for an attacker to forge a secure or host-only cookie prefix.
Publish Date: 2020-06-19
URL: CVE-2020-8184
CVSS 3 Score Details (7.5 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://groups.google.com/forum/#!topic/rubyonrails-security/OWtmozPH9Ak
Release Date: 2020-06-19
Fix Resolution: rack - 2.1.4, 2.2.3
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in rack gem cve high severity vulnerability vulnerable library rack gem rack provides a minimal modular and adaptable interface for developing web applications in ruby by wrapping http requests and responses in the simplest way possible it unifies and distills the api for web servers web frameworks and software in between the so called middleware into a single method call also see library home page a href path to dependency file contacts app gemfile lock path to vulnerable library var lib gems cache rack gem dependency hierarchy sass rails gem root library sprockets rails gem sprockets gem x rack gem vulnerable library vulnerability details a reliance on cookies without validation integrity check security vulnerability exists in rack rack that makes it is possible for an attacker to forge a secure or host only cookie prefix publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rack step up your open source security game with mend ,0
230502,25482670019.0,IssuesEvent,2022-11-26 01:10:26,ghuangsnl/spring-boot,https://api.github.com/repos/ghuangsnl/spring-boot,opened,CVE-2022-41946 (Medium) detected in postgresql-42.2.14.jar,security vulnerability,"## CVE-2022-41946 - Medium Severity Vulnerability
Vulnerable Library - postgresql-42.2.14.jar
PostgreSQL JDBC Driver Postgresql
Library home page: https://jdbc.postgresql.org
Path to vulnerable library: /spring-boot-tests/spring-boot-smoke-tests/spring-boot-smoke-test-data-r2dbc-liquibase/build.gradle
Dependency Hierarchy:
- :x: **postgresql-42.2.14.jar** (Vulnerable Library)
Found in HEAD commit: 275c27d9dd5c88d8db426ebfb734d89d3f8e7412
Vulnerability Details
pgjdbc is an open source postgresql JDBC Driver. In affected versions a prepared statement using either `PreparedStatement.setText(int, InputStream)` or `PreparedStatement.setBytea(int, InputStream)` will create a temporary file if the InputStream is larger than 2k. This will create a temporary file which is readable by other users on Unix like systems, but not MacOS. On Unix like systems, the system's temporary directory is shared between all users on that system. Because of this, when files and directories are written into this directory they are, by default, readable by other users on that same system. This vulnerability does not allow other users to overwrite the contents of these directories or files. This is purely an information disclosure vulnerability. Because certain JDK file system APIs were only added in JDK 1.7, this fix is dependent upon the version of the JDK you are using. Java 1.7 and higher users: this vulnerability is fixed in 4.5.0. Java 1.6 and lower users: no patch is available. If you are unable to patch, or are stuck running on Java 1.6, specifying the java.io.tmpdir system environment variable to a directory that is exclusively owned by the executing user will mitigate this vulnerability.
Publish Date: 2022-11-23
URL: CVE-2022-41946
CVSS 3 Score Details (4.7 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://github.com/pgjdbc/pgjdbc/security/advisories/GHSA-562r-vg33-8x8h
Release Date: 2022-11-23
Fix Resolution: 42.2.26.jre6
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-41946 (Medium) detected in postgresql-42.2.14.jar - ## CVE-2022-41946 - Medium Severity Vulnerability
Vulnerable Library - postgresql-42.2.14.jar
PostgreSQL JDBC Driver Postgresql
Library home page: https://jdbc.postgresql.org
Path to vulnerable library: /spring-boot-tests/spring-boot-smoke-tests/spring-boot-smoke-test-data-r2dbc-liquibase/build.gradle
Dependency Hierarchy:
- :x: **postgresql-42.2.14.jar** (Vulnerable Library)
Found in HEAD commit: 275c27d9dd5c88d8db426ebfb734d89d3f8e7412
Vulnerability Details
pgjdbc is an open source postgresql JDBC Driver. In affected versions a prepared statement using either `PreparedStatement.setText(int, InputStream)` or `PreparedStatement.setBytea(int, InputStream)` will create a temporary file if the InputStream is larger than 2k. This will create a temporary file which is readable by other users on Unix like systems, but not MacOS. On Unix like systems, the system's temporary directory is shared between all users on that system. Because of this, when files and directories are written into this directory they are, by default, readable by other users on that same system. This vulnerability does not allow other users to overwrite the contents of these directories or files. This is purely an information disclosure vulnerability. Because certain JDK file system APIs were only added in JDK 1.7, this fix is dependent upon the version of the JDK you are using. Java 1.7 and higher users: this vulnerability is fixed in 4.5.0. Java 1.6 and lower users: no patch is available. If you are unable to patch, or are stuck running on Java 1.6, specifying the java.io.tmpdir system environment variable to a directory that is exclusively owned by the executing user will mitigate this vulnerability.
Publish Date: 2022-11-23
URL: CVE-2022-41946
CVSS 3 Score Details (4.7 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://github.com/pgjdbc/pgjdbc/security/advisories/GHSA-562r-vg33-8x8h
Release Date: 2022-11-23
Fix Resolution: 42.2.26.jre6
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in postgresql jar cve medium severity vulnerability vulnerable library postgresql jar postgresql jdbc driver postgresql library home page a href path to vulnerable library spring boot tests spring boot smoke tests spring boot smoke test data liquibase build gradle dependency hierarchy x postgresql jar vulnerable library found in head commit a href vulnerability details pgjdbc is an open source postgresql jdbc driver in affected versions a prepared statement using either preparedstatement settext int inputstream or preparedstatemet setbytea int inputstream will create a temporary file if the inputstream is larger than this will create a temporary file which is readable by other users on unix like systems but not macos on unix like systems the system s temporary directory is shared between all users on that system because of this when files and directories are written into this directory they are by default readable by other users on that same system this vulnerability does not allow other users to overwrite the contents of these directories or files this is purely an information disclosure vulnerability because certain jdk file system apis were only added in jdk this this fix is dependent upon the version of the jdk you are using java and higher users this vulnerability is fixed in java and lower users no patch is available if you are unable to patch or are stuck running on java specifying the java io tmpdir system environment variable to a directory that is exclusively owned by the executing user will mitigate this vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0
774379,27193709092.0,IssuesEvent,2023-02-20 02:05:40,gitpod-io/gitpod,https://api.github.com/repos/gitpod-io/gitpod,closed,Hide www.gitpod-staging.com from search engines,meta: stale priority: high,"**Issue**:
we currently don't hide www.gitpod-staging.com from search engines, even though it contains outdated information and visuals that would only confuse users. It offers no value and should therefore be hidden from search engines
",1.0,"Hide www.gitpod-staging.com from search engines - **Issue**:
we currently don't hide www.gitpod-staging.com from search engines, even though it contains outdated information and visuals that would only confuse users. It offers no value and should therefore be hidden from search engines
",0,hide from search engines issue we currently don t hide but it includes outdated information and visuals that would only confuse the user it offers no value and should therefore be hidden for search engines img width alt bildschirmfoto um src ,0
715,9634809770.0,IssuesEvent,2019-05-15 22:23:20,Azure/azure-functions-host,https://api.github.com/repos/Azure/azure-functions-host,closed,[placeholder] Specialize language worker if worker runtime is not set,P2 Supportability,"For specializing a language worker that is started in placeholder mode, functions host relies on FUNCTIONS_WORKER_RUNTIME to figure out the language for an app. If this app setting is not set, language worker is not specialized.
All the tooling has been updated to set this app setting. We need to handle apps that are deployed via ARM and do not set this app setting. Either
- Disable placeholders for such apps or
- Specialize language worker after functions get indexed.",True,"[placeholder] Specialize language worker if worker runtime is not set - For specializing a language worker that is started in placeholder mode, functions host relies on FUNCTIONS_WORKER_RUNTIME to figure out the language for an app. If this app setting is not set, language worker is not specialized.
All the tooling has been updated to set this app setting. We need to handle apps that are deployed via ARM and do not set this app setting. Either
- Disable placeholders for such apps or
- Specialize language worker after functions get indexed.",1, specialize language worker if worker runtime is not set for specializing a language worker that is started in placeholder mode functions host relies on functions worker runtime to figure out the language for an app if this app setting is not set language worker is not specialized all the tooling has been updated to set this app setting we need handle apps that are deployed via arm and do not set this app setting either disable placeholders for such apps or specialize language worker after functions get indexed ,1
982,12501987072.0,IssuesEvent,2020-06-02 03:01:51,zcash/zcash,https://api.github.com/repos/zcash/zcash,closed,mempool_spendcoinbase.py fails on macOS,bug macOS portability,"Latest master. 100% reproducible.
```
/Users/rex/zcash on branch master rex@MacBook-Pro-2018% rm -rf cache
/Users/rex/zcash on branch master rex@MacBook-Pro-2018% PYTHON_DEBUG=1 ./qa/pull-tester/rpc-tests.sh mempool_spendcoinbase.py
=== Running testscript mempool_spendcoinbase.py ===
Initializing test directory /var/folders/pn/rswd5k6175d02hbt25tvl47r0000gn/T/testuftu2h7j
initialize_chain: bitcoind started, waiting for RPC to come up
initialize_chain: RPC succesfully started
initialize_chain: bitcoind started, waiting for RPC to come up
initialize_chain: RPC succesfully started
initialize_chain: bitcoind started, waiting for RPC to come up
initialize_chain: RPC succesfully started
initialize_chain: bitcoind started, waiting for RPC to come up
initialize_chain: RPC succesfully started
start_node: bitcoind started, waiting for RPC to come up
start_node: RPC succesfully started
Assertion failed: (left == right)
left: <6699348322060206085>
right: <0>
File ""/Users/rex/zcash/qa/rpc-tests/test_framework/test_framework.py"", line 135, in main
self.run_test()
File ""/Users/rex/zcash/qa/rpc-tests/mempool_spendcoinbase.py"", line 73, in run_test
assert_equal(mempoolinfo['usage'], 0)
File ""/Users/rex/zcash/qa/rpc-tests/test_framework/util.py"", line 507, in assert_equal
raise AssertionError(""(left == right)%s\n left: <%s>\n right: <%s>"" % (message, str(expected), str(actual)))
Stopping nodes
Cleaning up
Failed
!!! FAIL: mempool_spendcoinbase.py (53s) !!!
Tests completed: 1
successes 0; failures: 1
Failing tests: mempool_spendcoinbase.py
```",True,"mempool_spendcoinbase.py fails on macOS - Latest master. 100% reproducible.
```
/Users/rex/zcash on branch master rex@MacBook-Pro-2018% rm -rf cache
/Users/rex/zcash on branch master rex@MacBook-Pro-2018% PYTHON_DEBUG=1 ./qa/pull-tester/rpc-tests.sh mempool_spendcoinbase.py
=== Running testscript mempool_spendcoinbase.py ===
Initializing test directory /var/folders/pn/rswd5k6175d02hbt25tvl47r0000gn/T/testuftu2h7j
initialize_chain: bitcoind started, waiting for RPC to come up
initialize_chain: RPC succesfully started
initialize_chain: bitcoind started, waiting for RPC to come up
initialize_chain: RPC succesfully started
initialize_chain: bitcoind started, waiting for RPC to come up
initialize_chain: RPC succesfully started
initialize_chain: bitcoind started, waiting for RPC to come up
initialize_chain: RPC succesfully started
start_node: bitcoind started, waiting for RPC to come up
start_node: RPC succesfully started
Assertion failed: (left == right)
left: <6699348322060206085>
right: <0>
File ""/Users/rex/zcash/qa/rpc-tests/test_framework/test_framework.py"", line 135, in main
self.run_test()
File ""/Users/rex/zcash/qa/rpc-tests/mempool_spendcoinbase.py"", line 73, in run_test
assert_equal(mempoolinfo['usage'], 0)
File ""/Users/rex/zcash/qa/rpc-tests/test_framework/util.py"", line 507, in assert_equal
raise AssertionError(""(left == right)%s\n left: <%s>\n right: <%s>"" % (message, str(expected), str(actual)))
Stopping nodes
Cleaning up
Failed
!!! FAIL: mempool_spendcoinbase.py (53s) !!!
Tests completed: 1
successes 0; failures: 1
Failing tests: mempool_spendcoinbase.py
```",1,mempool spendcoinbase py fails on macos latest master reproducible users rex zcash on branch master rex macbook pro rm rf cache users rex zcash on branch master rex macbook pro python debug qa pull tester rpc tests sh mempool spendcoinbase py running testscript mempool spendcoinbase py initializing test directory var folders pn t initialize chain bitcoind started waiting for rpc to come up initialize chain rpc succesfully started initialize chain bitcoind started waiting for rpc to come up initialize chain rpc succesfully started initialize chain bitcoind started waiting for rpc to come up initialize chain rpc succesfully started initialize chain bitcoind started waiting for rpc to come up initialize chain rpc succesfully started start node bitcoind started waiting for rpc to come up start node rpc succesfully started assertion failed left right left right file users rex zcash qa rpc tests test framework test framework py line in main self run test file users rex zcash qa rpc tests mempool spendcoinbase py line in run test assert equal mempoolinfo file users rex zcash qa rpc tests test framework util py line in assert equal raise assertionerror left right s n left n right message str expected str actual stopping nodes cleaning up failed fail mempool spendcoinbase py tests completed successes failures failing tests mempool spendcoinbase py ,1
1135,14526351218.0,IssuesEvent,2020-12-14 14:07:16,IBM/FHIR,https://api.github.com/repos/IBM/FHIR,closed,Add support for AuditEvent to Log Service,cloud portability,"**Is your feature request related to a problem? Please describe.**
AuditEvent A record of an event made for purposes of maintaining a security log.
Currently, the implementation outputs in the CADF format.
**Describe the solution you'd like**
- Implement the AuditLogService to support https://www.hl7.org/fhir/auditevent.html
- Use the Kafka Backend like WHC
- Add Tests and Coverage
**Describe alternatives you've considered**
- Current implementation is in CADF.
**Additional context**
AuditEvent - https://www.hl7.org/fhir/auditevent.html
",True,"Add support for AuditEvent to Log Service - **Is your feature request related to a problem? Please describe.**
AuditEvent A record of an event made for purposes of maintaining a security log.
Currently, the implementation outputs in the CADF format.
**Describe the solution you'd like**
- Implement the AuditLogService to support https://www.hl7.org/fhir/auditevent.html
- Use the Kafka Backend like WHC
- Add Tests and Coverage
**Describe alternatives you've considered**
- Current implementation is in CADF.
**Additional context**
AuditEvent - https://www.hl7.org/fhir/auditevent.html
",1,add support for auditevent to log service is your feature request related to a problem please describe auditevent a record of an event made for purposes of maintaining a security log currently the implementation outputs in the cadf format describe the solution you d like implement a the auditlogservice to support use the kafka backend like whc add tests and coverage describe alternatives you ve considered current implementation is in cadf additional context auditevent ,1
247102,20957032210.0,IssuesEvent,2022-03-27 08:26:40,Leaflet/Leaflet,https://api.github.com/repos/Leaflet/Leaflet,closed,DomEvent functions not covered by tests,help wanted good first issue tests,"We have some [DomEvent](https://github.com/Leaflet/Leaflet/blob/master/src/dom/DomEvent.js) tests here: https://github.com/Leaflet/Leaflet/blob/master/spec/suites/dom/DomEventSpec.js, but not all functions are covered.
Here is the list of missing tests:
1. `on( el, types, fn, context?)` (where `types` has several events)
2. `on( el, eventMap, context?)`
3. `off( el, types, fn, context?)` (where `types` has several events)
4. `off( el, eventMap, context?)`
Note: some `on`/`off` tests implemented in #7125.
5. `stop( ev)`
6. `getMousePosition( ev, container?)`
7. `getWheelDelta( ev)`
Ref: https://leafletjs.com/reference-1.7.1.html#domevent
Note:
When implementing required test cases consider minor refactoring in #7438 (which already has tests for `disableScrollPropagation` and `disableClickPropagation`).
---
Related:
- [x] `DomEvent.DoubleTap.js` is covered by #7027.
- [x] `DomEvent.Pointer.js` is covered by #7415.",1.0,"DomEvent functions not covered by tests - We have some [DomEvent](https://github.com/Leaflet/Leaflet/blob/master/src/dom/DomEvent.js) tests here: https://github.com/Leaflet/Leaflet/blob/master/spec/suites/dom/DomEventSpec.js, but not all functions are covered.
Here is the list of missing tests:
1. `on( el, types, fn, context?)` (where `types` has several events)
2. `on( el, eventMap, context?)`
3. `off( el, types, fn, context?)` (where `types` has several events)
4. `off( el, eventMap, context?)`
Note: some `on`/`off` tests implemented in #7125.
5. `stop( ev)`
6. `getMousePosition( ev, container?)`
7. `getWheelDelta( ev)`
Ref: https://leafletjs.com/reference-1.7.1.html#domevent
Note:
When implementing required test cases consider minor refactoring in #7438 (which already has tests for `disableScrollPropagation` and `disableClickPropagation`).
---
Related:
- [x] `DomEvent.DoubleTap.js` is covered by #7027.
- [x] `DomEvent.Pointer.js` is covered by #7415.",0,domevent functions not covered by tests we have some tests here but not all functions are covered here the list of lacking tests on el types fn context where types has several events on el eventmap context off el types fn context where types has several events off el eventmap context note some on off tests implemented in stop ev getmouseposition ev container getwheeldelta ev ref note when implementing required test cases consider minor refactoring in which already has tests for disablescrollpropagation and disableclickpropagation related domevent doubletap js is covered by domevent pointer js is covered by ,0
48465,20156189593.0,IssuesEvent,2022-02-09 16:39:57,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,App Service tutorial page refers to provisioning of SQL database,app-service/svc triaged cxp doc-bug Pri1,"I think this section is not correct in mentioning provisioning of a SQL database, as this article is about App Service:
_Deploy app to Azure
In this step, you deploy your SQL Database-connected .NET Core application to App Service._
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dbcc24f0-9856-a7db-f229-c383e5e5c57d
* Version Independent ID: 55d51b2a-c8fe-086e-a1c2-b0592f57967c
* Content: [Tutorial: Host RESTful API with CORS - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-rest-api)
* Content Source: [articles/app-service/app-service-web-tutorial-rest-api.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/app-service/app-service-web-tutorial-rest-api.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin**",1.0,"App Service tutorial page refers to provisioning of SQL database - I think this section is not correct, mentioning provisioning of SQL database as this article is about App Service:
_Deploy app to Azure
In this step, you deploy your SQL Database-connected .NET Core application to App Service._
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dbcc24f0-9856-a7db-f229-c383e5e5c57d
* Version Independent ID: 55d51b2a-c8fe-086e-a1c2-b0592f57967c
* Content: [Tutorial: Host RESTful API with CORS - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-rest-api)
* Content Source: [articles/app-service/app-service-web-tutorial-rest-api.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/app-service/app-service-web-tutorial-rest-api.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin**",0,app service tutorial page refers to provisioning of sql database i think this section is not correct mentioning provisioning of sql database as this article is about app service deploy app to azure in this step you deploy your sql database connected net core application to app service document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service app service github login cephalin microsoft alias cephalin ,0
88,3086125511.0,IssuesEvent,2015-08-25 00:11:26,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,OpenCL errors on OSX 10.9.5,portability,"Hi,
System = MacBook Pro 17-inch - i7 - SSD - plenty of RAM - Mac OS X Mavericks 10.9.5
a/ fresh install of the latest bleeding version from today / to be noted = no tweaks for an optimal install on the Mac, like Homebrew, GCC etc... just a classic fresh install
1/ macbookpro:run xxx$ ./john --list=build-info
Version: 1.8.0.6-jumbo-1-bleeding
Build: darwin13.4.0 64-bit AVX-ac
SIMD: AVX, interleaving: MD4:4 MD5:5 SHA1:2 SHA256:1 SHA512:1
$JOHN is ./
Format interface version: 13
Max. number of reported tunable costs: 3
Rec file version: REC4
Charset file version: CHR3
CHARSET_MIN: 1 (0x01)
CHARSET_MAX: 255 (0xff)
CHARSET_LENGTH: 24
Max. Markov mode level: 400
Max. Markov mode password length: 30
clang version: 6.0 (clang-600.0.57) (gcc 4.2.1 compatibility)
OpenCL library version: 1.2
Crypto library: OpenSSL
OpenSSL library version: 0009081af (loaded: 0009081df)
OpenSSL 0.9.8za 5 Jun 2014 (loaded: OpenSSL 0.9.8zd 8 Jan 2015)
File locking: fcntl()
fseek(): fseek
ftell(): ftell
fopen(): fopen
memmem(): System's
******************************
I get errors once I launch the OpenCL test... (on some other algorithms I don't have that type of error, ...on others I have the same type).
1/ macbookpro:run xxxx$ ./john -form=raw-md5-opencl --test
Device 1: ATI Radeon HD 6750M
Benchmarking: Raw-MD5-opencl [MD5 OpenCL]... Build log: :257:18: warning: unknown attribute 'max_constant_size' ignored
attribute((max_constant_size (NUM_INT_KEYS * 4)))
^
DONE
Raw: 8305K c/s real, 69905K c/s virtual
2/ macbookpro:run xxxx$ ./john -form=descrypt-opencl --test
Device 1: ATI Radeon HD 6750M
Build log: :97:43: warning: unknown attribute 'max_constant_size' ignored
attribute((max_constant_size(3072)))
^
:101:43: warning: unknown attribute 'max_constant_size' ignored
attribute((max_constant_size(384)))
^
Benchmarking: descrypt-opencl, traditional crypt(3) [DES OpenCL]... DONE
Many salts: 1028K c/s real, 104857K c/s virtual
Only one salt: 1149K c/s real, 21845K c/s virtual
Thank you in advance for your help,
Denis",True,"OpenCL errors on OSX 10.9.5 - Hi,
System = MacBook Pro 17-inch - i7 - SSD - plenty of RAM - Mac OS X Mavericks 10.9.5
a/ fresh install of the latest bleeding version from today / to be noted = no tweaks for an optimal install on the Mac, like Homebrew, GCC etc... just a classic fresh install
1/ macbookpro:run xxx$ ./john --list=build-info
Version: 1.8.0.6-jumbo-1-bleeding
Build: darwin13.4.0 64-bit AVX-ac
SIMD: AVX, interleaving: MD4:4 MD5:5 SHA1:2 SHA256:1 SHA512:1
$JOHN is ./
Format interface version: 13
Max. number of reported tunable costs: 3
Rec file version: REC4
Charset file version: CHR3
CHARSET_MIN: 1 (0x01)
CHARSET_MAX: 255 (0xff)
CHARSET_LENGTH: 24
Max. Markov mode level: 400
Max. Markov mode password length: 30
clang version: 6.0 (clang-600.0.57) (gcc 4.2.1 compatibility)
OpenCL library version: 1.2
Crypto library: OpenSSL
OpenSSL library version: 0009081af (loaded: 0009081df)
OpenSSL 0.9.8za 5 Jun 2014 (loaded: OpenSSL 0.9.8zd 8 Jan 2015)
File locking: fcntl()
fseek(): fseek
ftell(): ftell
fopen(): fopen
memmem(): System's
******************************
I get errors when I launch the OpenCL tests... (on some other algorithms I don't get that type of error; on others I get the same type).
1/ macbookpro:run xxxx$ ./john -form=raw-md5-opencl --test
Device 1: ATI Radeon HD 6750M
Benchmarking: Raw-MD5-opencl [MD5 OpenCL]... Build log: :257:18: warning: unknown attribute 'max_constant_size' ignored
attribute((max_constant_size (NUM_INT_KEYS * 4)))
^
DONE
Raw: 8305K c/s real, 69905K c/s virtual
2/ macbookpro:run xxxx$ ./john -form=descrypt-opencl --test
Device 1: ATI Radeon HD 6750M
Build log: :97:43: warning: unknown attribute 'max_constant_size' ignored
attribute((max_constant_size(3072)))
^
:101:43: warning: unknown attribute 'max_constant_size' ignored
attribute((max_constant_size(384)))
^
Benchmarking: descrypt-opencl, traditional crypt(3) [DES OpenCL]... DONE
Many salts: 1028K c/s real, 104857K c/s virtual
Only one salt: 1149K c/s real, 21845K c/s virtual
Thank you in advance for your help,
Denis",1,opencl errors on osx hi system macbook pro ssd full of ram mac osx mavericks a fresh install of bleeding last verion from today to be noted no tweak for optimal install on the mac like homebrew gcc etc just classic fresh install macbookpro run xxx john list build info version jumbo bleeding build bit avx ac simd avx interleaving john is format interface version max number of reported tunable costs rec file version charset file version charset min charset max charset length max markov mode level max markov mode password length clang version clang gcc compatibility opencl library version crypto library openssl openssl library version loaded openssl jun loaded openssl jan file locking fcntl fseek fseek ftell ftell fopen fopen memmem system s i have error once i launch the opencl test one some other algo i don t have that type of errors on other i have the same type macbookpro run xxxx john form raw opencl test device ati radeon hd benchmarking raw opencl build log warning unknown attribute max constant size ignored attribute max constant size num int keys done raw c s real c s virtual macbookpro run xxxx john form descrypt opencl test device ati radeon hd build log warning unknown attribute max constant size ignored attribute max constant size warning unknown attribute max constant size ignored attribute max constant size benchmarking descrypt opencl traditional crypt done many salts c s real c s virtual only one salt c s real c s virtual thank you in advance for your help denis,1
1852,27398674518.0,IssuesEvent,2023-02-28 22:00:53,golang/vulndb,https://api.github.com/repos/golang/vulndb,closed,x/vulndb: potential Go vuln in github.com/answerdev/answer: GHSA-6cvf-m58q-h9wf,excluded: NOT_IMPORTABLE,"In GitHub Security Advisory [GHSA-6cvf-m58q-h9wf](https://github.com/advisories/GHSA-6cvf-m58q-h9wf), there is a vulnerability in the following Go packages or modules:
| Unit | Fixed | Vulnerable Ranges |
| - | - | - |
| [github.com/answerdev/answer](https://pkg.go.dev/github.com/answerdev/answer) | 1.0.5 | < 1.0.5 |
Cross references:
- Module github.com/answerdev/answer appears in issue #1541 EFFECTIVELY_PRIVATE
- Module github.com/answerdev/answer appears in issue #1550 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1551 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1552 EFFECTIVELY_PRIVATE
- Module github.com/answerdev/answer appears in issue #1553 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1554 EFFECTIVELY_PRIVATE
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/answerdev/answer
versions:
- fixed: 1.0.5
packages:
- package: github.com/answerdev/answer
description: Cross-site Scripting (XSS) - Stored in GitHub repository answerdev/answer
prior to 1.0.5.
cves:
- CVE-2023-0934
ghsas:
- GHSA-6cvf-m58q-h9wf
references:
- web: https://nvd.nist.gov/vuln/detail/CVE-2023-0934
- fix: https://github.com/answerdev/answer/commit/edc06942d51fa8e56a134c5c7e5c8826d9260da0
- web: https://huntr.dev/bounties/cd213098-5bab-487f-82c7-13698ad43b51
- advisory: https://github.com/advisories/GHSA-6cvf-m58q-h9wf
```",True,"x/vulndb: potential Go vuln in github.com/answerdev/answer: GHSA-6cvf-m58q-h9wf - In GitHub Security Advisory [GHSA-6cvf-m58q-h9wf](https://github.com/advisories/GHSA-6cvf-m58q-h9wf), there is a vulnerability in the following Go packages or modules:
| Unit | Fixed | Vulnerable Ranges |
| - | - | - |
| [github.com/answerdev/answer](https://pkg.go.dev/github.com/answerdev/answer) | 1.0.5 | < 1.0.5 |
Cross references:
- Module github.com/answerdev/answer appears in issue #1541 EFFECTIVELY_PRIVATE
- Module github.com/answerdev/answer appears in issue #1550 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1551 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1552 EFFECTIVELY_PRIVATE
- Module github.com/answerdev/answer appears in issue #1553 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1554 EFFECTIVELY_PRIVATE
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/answerdev/answer
versions:
- fixed: 1.0.5
packages:
- package: github.com/answerdev/answer
description: Cross-site Scripting (XSS) - Stored in GitHub repository answerdev/answer
prior to 1.0.5.
cves:
- CVE-2023-0934
ghsas:
- GHSA-6cvf-m58q-h9wf
references:
- web: https://nvd.nist.gov/vuln/detail/CVE-2023-0934
- fix: https://github.com/answerdev/answer/commit/edc06942d51fa8e56a134c5c7e5c8826d9260da0
- web: https://huntr.dev/bounties/cd213098-5bab-487f-82c7-13698ad43b51
- advisory: https://github.com/advisories/GHSA-6cvf-m58q-h9wf
```",1,x vulndb potential go vuln in github com answerdev answer ghsa in github security advisory there is a vulnerability in the following go packages or modules unit fixed vulnerable ranges cross references module github com answerdev answer appears in issue effectively private module github com answerdev answer appears in issue not importable module github com answerdev answer appears in issue not importable module github com answerdev answer appears in issue effectively private module github com answerdev answer appears in issue not importable module github com answerdev answer appears in issue effectively private see for instructions on how to triage this report modules module github com answerdev answer versions fixed packages package github com answerdev answer description cross site scripting xss stored in github repository answerdev answer prior to cves cve ghsas ghsa references web fix web advisory ,1
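As an aside for anyone reading the range above, a quick way to sanity-check whether a given module version falls inside the vulnerable range (< 1.0.5, fixed in 1.0.5) is a small comparison script. This is a generic sketch using the `packaging` library, not part of the x/vulndb tooling, and the helper name is made up for illustration.
```python
# Generic sketch, not part of x/vulndb tooling: compare a module version
# against the advisory's fixed release using the `packaging` library.
from packaging.version import Version

FIXED = Version("1.0.5")  # fixed release from the GHSA-6cvf-m58q-h9wf report

def is_vulnerable(installed: str) -> bool:
    # Versions strictly below the fixed release fall in the "< 1.0.5" range.
    return Version(installed) < FIXED

if __name__ == "__main__":
    for v in ("1.0.4", "1.0.5", "1.1.0"):
        print(v, "vulnerable" if is_vulnerable(v) else "not affected")
```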
651,8689291659.0,IssuesEvent,2018-12-03 18:17:43,chapel-lang/chapel,https://api.github.com/repos/chapel-lang/chapel,closed,Can't build test-venv from inside a virtualenv,area: BTR type: Bug type: Portability,"We can't build our test-venv (or chpldoc-venv) if trying to build from within a virtualenv.
```sh
$ virtualenv chpl-test
$ source chpl-test/bin/activate
$ make test-venv
...
Installing local copy of pip with get-pip.py from https://bootstrap.pypa.io/get-pip.py
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1622k 100 1622k 0 0 458k 0 0:00:03 0:00:03 --:--:-- 458k
Can not perform a '--user' install. User site-packages are not visible in this virtualenv.
```
`--user` isn't supported when running within a virtualenv, but simply not throwing `--user` leads to other errors because pip isn't installed where we're expecting it.",True,"Can't build test-venv from inside a virtualenv - We can't build our test-venv (or chpldoc-venv) if trying to build from within a virtualenv.
```sh
$ virtualenv chpl-test
$ source chpl-test/bin/activate
$ make test-venv
...
Installing local copy of pip with get-pip.py from https://bootstrap.pypa.io/get-pip.py
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1622k 100 1622k 0 0 458k 0 0:00:03 0:00:03 --:--:-- 458k
Can not perform a '--user' install. User site-packages are not visible in this virtualenv.
```
`--user` isn't supported when running within a virtualenv, but simply not throwing `--user` leads to other errors because pip isn't installed where we're expecting it.",1,can t build test venv from inside a virtualenv we can t build our test venv or chpldoc venv if trying to build from within a virtualenv sh virtualenv chpl test source chpl test bin activate make test venv installing local copy of pip with get pip py from total received xferd average speed time time time current dload upload total spent left speed can not perform a user install user site packages are not visible in this virtualenv user isn t supported when running within a virtualenv but simply not throwing user leads to other errors because pip isn t installed where we re expecting it ,1
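A minimal sketch of the kind of guard this report implies, assuming the build shells out to pip from Python (the helper names here are hypothetical, not the actual Chapel Makefile logic): only pass `--user` when not running inside a virtualenv.
```python
# Minimal sketch, assuming a Python wrapper around pip; helper names are
# hypothetical and not taken from the Chapel build scripts.
import subprocess
import sys

def in_virtualenv() -> bool:
    # Classic virtualenv sets sys.real_prefix; PEP 405 venvs make
    # sys.prefix differ from sys.base_prefix.
    return getattr(sys, "real_prefix", None) is not None or \
        sys.prefix != getattr(sys, "base_prefix", sys.prefix)

def pip_install(package: str) -> None:
    cmd = [sys.executable, "-m", "pip", "install", package]
    if not in_virtualenv():
        cmd.append("--user")  # --user is rejected inside a virtualenv
    subprocess.check_call(cmd)
```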
628,8454511616.0,IssuesEvent,2018-10-21 04:09:04,meetrp/psa,https://api.github.com/repos/meetrp/psa,opened,Support for py3,Supportability enhancement,"**Is your feature request related to a problem? Please describe.**
Currently the setup.py test fails on python3.
**Describe the solution you'd like**
Enable the support in the tox
",True,"Support for py3 - **Is your feature request related to a problem? Please describe.**
Currently the setup.py test fails on python3.
**Describe the solution you'd like**
Enable the support in the tox
",1,support for is your feature request related to a problem please describe currently the setup py test fails on describe the solution you d like enable the support in the tox ,1
1983,31818439162.0,IssuesEvent,2023-09-13 22:51:18,chapel-lang/chapel,https://api.github.com/repos/chapel-lang/chapel,closed,libjemalloc library conflict on InfiniBand systems,type: Bug area: Third-Party area: Makefiles / Scripts user issue type: Portability,"On InfiniBand clusters with `libjemalloc.*` in the system paths, as of Chapel 1.31.0, the third-party jemalloc install does not get linked when compiling Chapel codes, as the linker picks the system library first. This results in undefined references to `chpl_je_mallocx`, `chpl_je_dallocx`, `chpl_je_sallocx` and others defined in the third-party jemalloc library when trying to compile any Chapel code.
This issue first appeared in Chapel 1.26.0 after reordering the -L flags.
More details can be found in the discussion https://chapel.discourse.group/t/undefined-reference-to-jemalloc/26323. A potential fix mentioned there would be to rename third-party bundled libraries to something more unique.",True,"libjemalloc library conflict on InfiniBand systems - On InfiniBand clusters with `libjemalloc.*` in the system paths, as of Chapel 1.31.0, the third-party jemalloc install does not get linked when compiling Chapel codes, as the linker picks the system library first. This results in undefined references to `chpl_je_mallocx`, `chpl_je_dallocx`, `chpl_je_sallocx` and others defined in the third-party jemalloc library when trying to compile any Chapel code.
This issue first appeared in Chapel 1.26.0 after reordering the -L flags.
More details can be found in the discussion https://chapel.discourse.group/t/undefined-reference-to-jemalloc/26323. A potential fix mentioned there would be to rename third-party bundled libraries to something more unique.",1,libjemalloc library conflict on infiniband systems on infiniband clusters with libjemalloc in the system paths as of chapel the third party jemalloc install does not get linked when compiling chapel codes as the linker picks the system library first this results in undefined references to chpl je mallocx chpl je dallocx chpl je sallocx and others defined in the third party jemalloc library when trying to compile any chapel code this issue first appeared in chapel after reordering the l flags more details can be found in the discussion a potential fix mentioned there would be to rename third party bundled libraries to something more unique ,1
267823,28509241600.0,IssuesEvent,2023-04-19 01:47:47,dpteam/RK3188_TABLET,https://api.github.com/repos/dpteam/RK3188_TABLET,closed,CVE-2012-6548 (Low) detected in linux-yocto-4.12v3.1.10 - autoclosed,Mend: dependency security vulnerability,"## CVE-2012-6548 - Low Severity Vulnerability
Vulnerable Library - linux-yocto-4.12v3.1.10
Linux 4.12 Embedded Kernel
Library home page: https://git.yoctoproject.org/git/linux-yocto-4.12
Found in HEAD commit: 0c501f5a0fd72c7b2ac82904235363bd44fd8f9e
Found in base branch: master
Vulnerable Source Files (0)
Vulnerability Details
The udf_encode_fh function in fs/udf/namei.c in the Linux kernel before 3.6 does not initialize a certain structure member, which allows local users to obtain sensitive information from kernel heap memory via a crafted application.
Publish Date: 2013-03-15
URL: CVE-2012-6548
CVSS 3 Score Details (2.9 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://nvd.nist.gov/vuln/detail/CVE-2012-6548
Release Date: 2013-03-15
Fix Resolution: 3.6
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2012-6548 (Low) detected in linux-yocto-4.12v3.1.10 - autoclosed - ## CVE-2012-6548 - Low Severity Vulnerability
Vulnerable Library - linux-yocto-4.12v3.1.10
Linux 4.12 Embedded Kernel
Library home page: https://git.yoctoproject.org/git/linux-yocto-4.12
Found in HEAD commit: 0c501f5a0fd72c7b2ac82904235363bd44fd8f9e
Found in base branch: master
Vulnerable Source Files (0)
Vulnerability Details
The udf_encode_fh function in fs/udf/namei.c in the Linux kernel before 3.6 does not initialize a certain structure member, which allows local users to obtain sensitive information from kernel heap memory via a crafted application.
Publish Date: 2013-03-15
URL: CVE-2012-6548
CVSS 3 Score Details (2.9 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://nvd.nist.gov/vuln/detail/CVE-2012-6548
Release Date: 2013-03-15
Fix Resolution: 3.6
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve low detected in linux yocto autoclosed cve low severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details the udf encode fh function in fs udf namei c in the linux kernel before does not initialize a certain structure member which allows local users to obtain sensitive information from kernel heap memory via a crafted application publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0
5880,32004100940.0,IssuesEvent,2023-09-21 13:55:21,MozillaFoundation/foundation.mozilla.org,https://api.github.com/repos/MozillaFoundation/foundation.mozilla.org,closed,"Fix exception type and `org` in `review_app_admin.py`, replace `GITHUB_TOKEN`",bug engineering maintain,"### Describe the bug
Whether through some sort of deprecation or some other nonsense, the review apps haven't been able to create an admin user and the app crashes when it can't. It's because the `GITHUB_TOKEN` environment variable for the review apps on Heroku is expired, so this needs to be replaced. Also, the `org` variable in `review_app_admin.py` is set to our old org of `mozilla` and not `MozillaFoundation` and needs to be changed before this 301 disappears. We're also trying to catch `ObjectDoesNotExist`, so let's maybe get more specific with `User.DoesNotExist` to handle the case of no admin user existing (see the sketch after this record).
Example:
```
Creating Donate Site record in Wagtail
Done
```
This creates problems in EKS when a new instance of the pod is created in a different AZ than the initial one, as PVs cannot be mounted across AZs.
Until that time we need troubleshooting on how to use the [pvc cleaner script](https://github.com/SumoLogic/sumologic-kubernetes-tools/tree/main/src/examples) updated to be used with OTC.",1.0,"Provide information on using the pvc cleaner script - Kubernetes cannot delete persistent volumes for statefulsets yet.
It's available as [alpha feature since 1.23 and should be available as beta since 1.26](https://github.com/kubernetes/enhancements/issues/1847)
This creates problems in EKS when new instance of the pod is created in different AZ than initial one as PVs cannot be mounted across different AZs.
Until that time we need troubleshooting on how to use the [pvc cleaner script](https://github.com/SumoLogic/sumologic-kubernetes-tools/tree/main/src/examples) updated to be used with OTC.",0,provide information on using the pvc cleaner script kubernetes cannot delete persistent volumes for statefulsets yet it s available as this creates problems in eks when new instance of the pod is created in different az than initial one as pvs cannot be mounted across different azs until that time we need troubleshooting on how to use the updated to be used with otc ,0
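Returning to the `review_app_admin.py` exception handling mentioned above, a hedged sketch of the narrower catch looks like the following; the surrounding helper and its arguments are assumptions for illustration, not the script's actual code.
```python
# Hedged sketch of catching User.DoesNotExist instead of the broader
# ObjectDoesNotExist; the helper and its signature are illustrative only.
from django.contrib.auth import get_user_model

User = get_user_model()

def ensure_admin(username: str, email: str, password: str):
    try:
        return User.objects.get(username=username)
    except User.DoesNotExist:
        # No admin user yet: create one rather than crashing the review app.
        return User.objects.create_superuser(username, email, password)
```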
2352,7704872817.0,IssuesEvent,2018-05-21 13:48:38,intel-hpdd/intel-manager-for-lustre,https://api.github.com/repos/intel-hpdd/intel-manager-for-lustre,closed,Replace usage of block_devices.normalized_device_table in lustre.py,enhancement pinned reactive architecture,"when removing deprecated code due to changes as part of #440 it became apparent that
https://github.com/intel-hpdd/intel-manager-for-lustre/blob/11d1f5d6215bb553e2c3d458405dabe0b7874406/chroma-agent/chroma_agent/device_plugins/linux_components/block_devices.py#L15
cannot be removed despite of it longer being referenced from
https://github.com/intel-hpdd/intel-manager-for-lustre/blob/11d1f5d6215bb553e2c3d458405dabe0b7874406/chroma-agent/chroma_agent/device_plugins/linux.py#L1-L105
because it is still being referenced inside
https://github.com/intel-hpdd/intel-manager-for-lustre/blob/11d1f5d6215bb553e2c3d458405dabe0b7874406/chroma-agent/chroma_agent/device_plugins/lustre.py#L20
in order to get access to a populated normalized device table (ndt).
The relevant information required in lustre should be provided by the `device-aggregator` in some other way and then the `block_devices.py` file can be removed from the agent.",1.0,"Replace usage of block_devices.normalized_device_table in lustre.py - when removing deprecated code due to changes as part of #440 it became apparent that
https://github.com/intel-hpdd/intel-manager-for-lustre/blob/11d1f5d6215bb553e2c3d458405dabe0b7874406/chroma-agent/chroma_agent/device_plugins/linux_components/block_devices.py#L15
cannot be removed despite of it longer being referenced from
https://github.com/intel-hpdd/intel-manager-for-lustre/blob/11d1f5d6215bb553e2c3d458405dabe0b7874406/chroma-agent/chroma_agent/device_plugins/linux.py#L1-L105
because it is still being referenced inside
https://github.com/intel-hpdd/intel-manager-for-lustre/blob/11d1f5d6215bb553e2c3d458405dabe0b7874406/chroma-agent/chroma_agent/device_plugins/lustre.py#L20
in order to get access to a populated normalized device table (ndt).
The relevant information required in lustre should be provided by the `device-aggregator` in some other way and then the `block_devices.py` file can be removed from the agent.",0,replace usage of block devices normalized device table in lustre py when removing deprecated code due to changes as part of it became apparent that cannot be removed despite of it longer being referenced from because it is still being referenced inside in order to get access to a populated normalized device table ndt the relevant information required in lustre should be provided by the device aggregator in some other way and then the block devices py file can be removed from the agent ,0
1524,22156062272.0,IssuesEvent,2022-06-03 22:52:14,apache/beam,https://api.github.com/repos/apache/beam,opened,Intuitive default behavior for sdk_location pipeline option,portability P3 improvement sdk-py-harness,"The current default value of ""default"" implies a Dataflow specific behavior of the artifact stager. The same stager is also used by the portable runner, which has to specify a value ""container"", which actually means to not stage the SDK. That should be the default behavior and the default value for the sdk_location should be None. The Dataflow runner can then specify a value such as ""pypi"" which conveys more closely the expected behavior.
Imported from Jira [BEAM-5525](https://issues.apache.org/jira/browse/BEAM-5525). Original Jira may contain additional context.
Reported by: thw.",True,"Intuitive default behavior for sdk_location pipeline option - The current default value of ""default"" implies a Dataflow specific behavior of the artifact stager. The same stager is also used by the portable runner, which has to specify a value ""container"", which actually means to not stage the SDK. That should be the default behavior and the default value for the sdk_location should be None. The Dataflow runner can then specify a value such as ""pypi"" which conveys more closely the expected behavior.
Imported from Jira [BEAM-5525](https://issues.apache.org/jira/browse/BEAM-5525). Original Jira may contain additional context.
Reported by: thw.",1,intuitive default behavior for sdk location pipeline option the current default value of default implies a dataflow specific behavior of the artifact stager the same stager is also used by the portable runner which has to specify a value container which actually means to not stage the sdk that should be the default behavior and the default value for the sdk location should be none the dataflow runner can then specify a value such as pypi which conveys more closely the expected behavior imported from jira original jira may contain additional context reported by thw ,1
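For context, this is roughly how a pipeline author works around the current default with the portable runner today; a sketch assuming the Python SDK's `SetupOptions.sdk_location` option, whose exact defaults may vary across Beam versions.
```python
# Sketch assuming the Beam Python SDK's SetupOptions; the "container" value
# (meaning "do not stage the SDK") is the workaround described above.
from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions

options = PipelineOptions(["--runner=PortableRunner", "--sdk_location=container"])
setup = options.view_as(SetupOptions)
print(setup.sdk_location)  # "container" rather than the Dataflow-oriented "default"
```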
26714,7859552342.0,IssuesEvent,2018-06-21 16:57:55,mono/monodevelop,https://api.github.com/repos/mono/monodevelop,opened,Next/previous in structured build output should be insensitive when no results,Area: Structured Build Output vs-sync,"The next/previous buttons in the structured build output should be displayed as insensitive when there are no results. The green icons look like they're sensitive.
",1.0,"Next/previous in structured build output should be insensitive when no results - The next/previous buttons in the structured build output should be displayed as insensitive when there are no results. The green icons look like they're sensitive.
",0,next previous in structured build output should be insensitive when no results the next previous buttons in the structured build output should be displayed as insensitive when there are no results the green icons look like they re sensitive img width alt screen shot at pm src ,0
118951,4758281619.0,IssuesEvent,2016-10-24 19:01:51,michaeljcalkins/rangersteve-ideas,https://api.github.com/repos/michaeljcalkins/rangersteve-ideas,closed,IDEA: Buying guns using your score,Priority: Medium Status: Idea Time: > Week Type: Enhancement,"- Everyone starts out with 0 score, when you switch guns it costs you your score
- AK is free
- When you kill AI Turret or enemy you'll gain score that you can use to buy guns",1.0,"IDEA: Buying guns using your score - - Everyone starts out with 0 score, when you switch guns it costs you your score
- AK is free
- When you kill AI Turret or enemy you'll gain score that you can use to buy guns",0,idea buying guns using your score everyone starts out with score when you switch guns it costs you your score ak is free when you kill ai turret or enemy you ll gain score that you can use to buy guns,0
822,10587556402.0,IssuesEvent,2019-10-08 22:31:11,microsoft/botframework-sdk,https://api.github.com/repos/microsoft/botframework-sdk,closed,Add UserVoice to our milestone planning process,4.6 ridealong supportability,"Once our community engineer starts, I'd like to use UserVoice or something similar to help us engage the community around milestone feature work",True,"Add UserVoice to our milestone planning process - Once our community engineer starts, I'd like to use UserVoice or something similar to help us engage the community around milestone feature work",1,add uservoice to our milestone planning process once our community engineer starts i d like to use uservoice or something similar to help us engage the community around milestone feature work,1
323,5892944155.0,IssuesEvent,2017-05-17 20:43:16,esnet/iperf,https://api.github.com/repos/esnet/iperf,closed,"Stuck at ""connected to"" (High Load UDP Traffic)",portability question,"Hello,
I have noticed that sometimes, when generating more downstream UDP traffic (e.g. 1 Gbps) than the receiving link can handle (e.g. 100 Mbps), iperf3 gets stuck at ""connected to"" as depicted in the picture below.

Sometimes it stays there forever, and the work-around consists of stopping and restarting the test. Other times it unsticks itself after about 10-20 seconds and reports the expected value, but with a different packet loss during the first seconds compared to the following time intervals:
- Case 1 - Faulty Scenario (Stuck at connected to for a couple of seconds)

- Case 2 - Working Scenario (Test started right away)

- Network Topology:

In aid of understanding the root cause of this issue I have inspected the network traffic (at the client side) and found out that the iPerf3 client only moves from the ""connected to"" state to start presenting the time intervals after it receives a TCP packet from the server containing the payload ""0102"", as depicted in the picture below (which in this case took 15 seconds):

I have also collected a network trace at the server side and found out that the server starts generating the UDP traffic before it sends the TCP packet. Because there is less available bandwidth at the client side, the packet gets discarded along the way, and the iperf3 client only unsticks itself if it is lucky enough to receive a TCP retransmission which, due to the ""TCP Exponential Back-off Algorithm"", may take forever.
- TCP Retransmissions

Please also note that packet 11 (highlighted in red) was the last packet before UDP traffic and the ""TCP Packet"" (highlighted in green) only shows up at frame 15958, which means that a total of 15946 UDP packets were sent before the ""start"" signalisation.
Is anyone else struggling with the same issue? Have you considered only generating the traffic after this packet is sent?
Kind regards,
João
",True,"Stuck at ""connected to"" (High Load UDP Traffic) - Hello,
I have noticed that sometimes, when generating more downstream UDP traffic (e.g. 1 Gbps) than the receiving link can handle (e.g. 100 Mbps), iperf3 gets stuck at ""connected to"" as depicted in the picture below.

Sometimes it stays there forever, and the work-around consists of stopping and restarting the test. Other times it unsticks itself after about 10-20 seconds and reports the expected value, but with a different packet loss during the first seconds compared to the following time intervals:
- Case 1 - Faulty Scenario (Stuck at connected to for a couple of seconds)

- Case 2 - Working Scenario (Test started right away)

- Network Topology:

In aid of understanding the root cause of this issue I have inspected the network traffic (at the client side) and found out that the iPerf3 client only moves from the ""connected to"" state to start presenting the time intervals after it receives a TCP packet from the server containing the payload ""0102"", as depicted in the picture below (which in this case took 15 seconds):

I have also collected a network trace at the server side and found out that the server starts generating the UDP traffic before it sends the TCP packet. Because there is less available bandwidth at the client side, the packet gets discarded along the way, and the iperf3 client only unsticks itself if it is lucky enough to receive a TCP retransmission which, due to the ""TCP Exponential Back-off Algorithm"", may take forever.
- TCP Retransmissions

Please also note that packet 11 (highlighted in red) was the last packet before UDP traffic and the ""TCP Packet"" (highlighted in green) only shows up at frame 15958, which means that a total of 15946 UDP packets were sent before the ""start"" signalisation.
Is anyone else struggling with the same issue? Have you considered only generating the traffic after this packet is sent?
Kind regards,
João
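To illustrate the ordering the reporter is asking about, here is a generic sketch (not iperf3 source code) of a server that waits for the client to acknowledge the TCP start signal before generating any UDP load; the `0x01 0x02` payload mirrors the report, while the acknowledgement byte is purely an assumption for illustration.
```python
# Generic sketch, not iperf3 code: signal "start" over TCP and wait for an
# acknowledgement before flooding the link with UDP test traffic.
import socket

def run_server(ctrl: socket.socket, udp_peer: tuple, num_packets: int = 1000) -> None:
    ctrl.sendall(b"\x01\x02")        # start signal over the existing TCP control connection
    if ctrl.recv(1) != b"\x06":      # assumed ACK byte; iperf3's real protocol differs
        raise RuntimeError("client never confirmed the start signal")
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * 1200
    for _ in range(num_packets):     # only now generate the UDP load
        udp.sendto(payload, udp_peer)
```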
",1,stuck at connected to high load udp traffic hello i have noticed that sometimes when generating more downstream udp traffic e g gbps than the receiving link can handle e g mbps gets stuck at connected to as depicted on the picture bellow sometimes it stays there forever and the work around consists on stopping and restarting the test other times it unstucks itself after about seconds reporting the expected value but with a different packet loss at the first seconds when comparing to the following time intervals case faulty scenario stuck at connected to for a couple of seconds case working scenario test started right away network topology in aid of understanding the root cause of this issue i have inspected the network traffic at the client side and found out that client only moves from the connect to state to start presenting the time intervals after it receives a tcp packet from the server containing the following payload as depicted on the picture bellow which in this case took seconds i have also collected a network trace at the server side and found out that because the server starts generating the udp traffic before it sends the tcp packet as a result because there is less available bandwidth at the client side the packet gets discarded along the way and client only unstucks itself if it is lucky enough to receive a tcp retransmission which due to the tcp exponential back off algorithm may take forever tcp retransmissions please also note that packet highlighted in red was the last packet before udp traffic and the tcp packet highlighted in green only shows up at frame which means that a total of udp packets were sent before the start signalisation is anyone else struggling with the same issue have you considered only generating the traffic after this packet is sent kind regards joão ,1
1945,30566056434.0,IssuesEvent,2023-07-20 17:51:53,chapel-lang/chapel,https://api.github.com/repos/chapel-lang/chapel,opened,Support multiple GPU architectures in compilation,type: Feature Request user issue type: Portability area: GPU Support,"This is a somewhat long-standing wish which also came up in a setting where we needed multiple architectures because of the existence of an integrated GPU: https://github.com/chapel-lang/chapel/issues/22754
It is typical to compile for multiple GPU virtual/real architectures. Currently, we don't support that. For NVIDIA we default to `sm_60`; for AMD we ask the user to set `CHPL_GPU_ARCH`, which should be a single architecture. Finding the ideal architecture code is difficult on an HPC system where the login node doesn't have GPUs and the GPU that you'll run on may differ based on the partition you submit your job to. For NVIDIA, at least, we can think of JITing for the architecture that you're running on, but that's not something we're planning to do in the near term.
We should improve our `CHPL_GPU_ARCH` string processing in the compiler to allow comma-separated lists. ",True,"Support multiple GPU architectures in compilation - This is a somewhat long-standing wish which also came up in a setting where we needed multiple architectures because of the existence of an integrated GPU: https://github.com/chapel-lang/chapel/issues/22754
It is typical to compile for multiple GPU virtual/real architectures. Currently, we don't support that. For NVIDIA we default to `sm_60`; for AMD we ask the user to set `CHPL_GPU_ARCH`, which should be a single architecture. Finding the ideal architecture code is difficult on an HPC system where the login node doesn't have GPUs and the GPU that you'll run on may differ based on the partition you submit your job to. For NVIDIA, at least, we can think of JITing for the architecture that you're running on, but that's not something we're planning to do in the near term.
We should improve our `CHPL_GPU_ARCH` string processing in the compiler to allow comma-separated lists. ",1,support multiple gpu architectures in compilation this is a somewhat long standing wish which also came up in a setting where we needed multiple architectures because of the existence of an integrated gpu it is typical to compile for multiple gpu virtual real architectures currently we don t support that for nvidia we default to sm for amd we ask user to set chpl gpu arch which should be a single architecture finding ideal architecture code is difficult on an hpc system where the login node doesn t have gpus and the gpu that you ll run on may differ based on the partition you submit your job to for nvidia at least we can think of jiting for the architecture that you re running on but that s not something we re planning to do in the near term we should improve our chpl gpu arch string processing in the compiler to allow comma separated lists ,1
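A rough sketch of what accepting a comma-separated list could look like, written in Python for brevity (the Chapel compiler itself is C++); the accepted architecture pattern and the default are illustrative assumptions.
```python
# Illustrative sketch only: parse a comma-separated CHPL_GPU_ARCH value such
# as "sm_60,sm_70,gfx906"; the validation pattern and default are assumptions.
import os
import re

_ARCH_RE = re.compile(r"^(sm_\d+|gfx\w+)$")

def gpu_arch_list(default: str = "sm_60") -> list:
    raw = os.environ.get("CHPL_GPU_ARCH", default)
    archs = [a.strip() for a in raw.split(",") if a.strip()]
    bad = [a for a in archs if not _ARCH_RE.match(a)]
    if bad:
        raise ValueError("unrecognized GPU arch value(s): " + ", ".join(bad))
    return archs
```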
66715,12814643740.0,IssuesEvent,2020-07-04 20:05:03,joomla/joomla-cms,https://api.github.com/repos/joomla/joomla-cms,closed,500 error Solved,No Code Attached Yet,"### Steps to reproduce the issue
Set up Joomla 4 Beta 2 and tried to install Akeeba Backup; after it times out, I get this error
### Expected result
install component and go back to install screen
### Actual result
Oops! An Error Occurred
The server returned a ""500 Whoops, looks like something went wrong."".
It will not even go to the front end to view the web site
### System information (as much as possible)
php 7.4.6
Apache/2.4.43
10.4.11-MariaDB
localhost xampp
windows 10 pro
### Additional comments
Okay, found the problem: I deleted the tables from the Gantry 5 install. The site is working now and Akeeba is installed
",1.0,"500 error Solved - ### Steps to reproduce the issue
Set up Joomla 4 Beta 2 and tried to install Akeeba Backup; after it times out, I get this error
### Expected result
install component and go back to install screen
### Actual result
Oops! An Error Occurred
The server returned a ""500 Whoops, looks like something went wrong."".
It will not even go to the front end to view the web site
### System information (as much as possible)
php 7.4.6
Apache/2.4.43
10.4.11-MariaDB
localhost xampp
windows 10 pro
### Additional comments
Okay, found the problem: I deleted the tables from the Gantry 5 install. The site is working now and Akeeba is installed
",0, error solved steps to reproduce the issue setup joomla tried to install akeeba backup after it times out get this error expected result install component and go back to install screen actual result oops an error occurred the server returned a whoops looks like something went wrong will not even go to front end to view web site system information as much as possible php apache mariadb localhost xampp windows pro additional comments okay found problem i deleted the tables for ganty install now have site working now akeeba installed ,0
643,8615677621.0,IssuesEvent,2018-11-19 21:19:40,chapel-lang/chapel,https://api.github.com/repos/chapel-lang/chapel,closed,Assist with Chapel AI workflow integration,type: Portability,Assist other Cray developers integrate Chapel AI workflow into other Cray platforms.,True,Assist with Chapel AI workflow integration - Assist other Cray developers integrate Chapel AI workflow into other Cray platforms.,1,assist with chapel ai workflow integration assist other cray developers integrate chapel ai workflow into other cray platforms ,1
1442,21676538638.0,IssuesEvent,2022-05-08 20:05:16,damccorm/test-migration-target,https://api.github.com/repos/damccorm/test-migration-target,opened,Configure logging level in portable Spark runner,P3 runner-spark improvement portability-spark,"log4j:WARN No appenders could be found for logger (org.apache.beam.vendor.grpc.v1p13p1.io.netty.util.internal.logging.InternalLoggerFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
[1] doesn't seem to have any effect.
[1] [https://github.com/apache/beam/blob/c9fb261bc7666788402840bb6ce1b0ce2fd445d1/runners/spark/job-server/build.gradle#L80-L81](https://github.com/apache/beam/blob/c9fb261bc7666788402840bb6ce1b0ce2fd445d1/runners/spark/job-server/build.gradle#L80-L81)
Imported from Jira [BEAM-7805](https://issues.apache.org/jira/browse/BEAM-7805). Original Jira may contain additional context.
Reported by: ibzib.",True,"Configure logging level in portable Spark runner - log4j:WARN No appenders could be found for logger (org.apache.beam.vendor.grpc.v1p13p1.io.netty.util.internal.logging.InternalLoggerFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
[1] doesn't seem to have any effect.
[1] [https://github.com/apache/beam/blob/c9fb261bc7666788402840bb6ce1b0ce2fd445d1/runners/spark/job-server/build.gradle#L80-L81](https://github.com/apache/beam/blob/c9fb261bc7666788402840bb6ce1b0ce2fd445d1/runners/spark/job-server/build.gradle#L80-L81)
Imported from Jira [BEAM-7805](https://issues.apache.org/jira/browse/BEAM-7805). Original Jira may contain additional context.
Reported by: ibzib.",1,configure logging level in portable spark runner warn no appenders could be found for logger org apache beam vendor grpc io netty util internal logging internalloggerfactory warn please initialize the system properly warn see for more info using spark s default profile org apache spark defaults properties doesn t seem to have any effect imported from jira original jira may contain additional context reported by ibzib ,1
283335,30913272393.0,IssuesEvent,2023-08-05 01:31:02,panasalap/linux-4.19.72_mlme,https://api.github.com/repos/panasalap/linux-4.19.72_mlme,reopened,CVE-2022-3625 (High) detected in linux-yoctov5.4.51,Mend: dependency security vulnerability,"## CVE-2022-3625 - High Severity Vulnerability
Vulnerable Library - linux-yoctov5.4.51
Yocto Linux Embedded kernel
Library home page: https://git.yoctoproject.org/git/linux-yocto
Found in base branch: master
Vulnerable Source Files (2)
/net/core/devlink.c
/net/core/devlink.c
Vulnerability Details
A vulnerability was found in Linux Kernel. It has been classified as critical. This affects the function devlink_param_set/devlink_param_get of the file net/core/devlink.c of the component IPsec. The manipulation leads to use after free. It is recommended to apply a patch to fix this issue. The identifier VDB-211929 was assigned to this vulnerability.
Publish Date: 2022-10-21
URL: CVE-2022-3625
CVSS 3 Score Details (7.8 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://www.linuxkernelcves.com/cves/CVE-2022-3625
Release Date: 2022-10-21
Fix Resolution: v5.4.211,v5.10.138,v5.15.63
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-3625 (High) detected in linux-yoctov5.4.51 - ## CVE-2022-3625 - High Severity Vulnerability
Vulnerable Library - linux-yoctov5.4.51
Yocto Linux Embedded kernel
Library home page: https://git.yoctoproject.org/git/linux-yocto
Found in base branch: master
Vulnerable Source Files (2)
/net/core/devlink.c
/net/core/devlink.c
Vulnerability Details
A vulnerability was found in Linux Kernel. It has been classified as critical. This affects the function devlink_param_set/devlink_param_get of the file net/core/devlink.c of the component IPsec. The manipulation leads to use after free. It is recommended to apply a patch to fix this issue. The identifier VDB-211929 was assigned to this vulnerability.
Publish Date: 2022-10-21
URL: CVE-2022-3625
CVSS 3 Score Details (7.8 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://www.linuxkernelcves.com/cves/CVE-2022-3625
Release Date: 2022-10-21
Fix Resolution: v5.4.211,v5.10.138,v5.15.63
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in linux cve high severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in base branch master vulnerable source files net core devlink c net core devlink c vulnerability details a vulnerability was found in linux kernel it has been classified as critical this affects the function devlink param set devlink param get of the file net core devlink c of the component ipsec the manipulation leads to use after free it is recommended to apply a patch to fix this issue the identifier vdb was assigned to this vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0
1225,16045180849.0,IssuesEvent,2021-04-22 12:54:06,AzureAD/microsoft-authentication-library-for-dotnet,https://api.github.com/repos/AzureAD/microsoft-authentication-library-for-dotnet,closed,Provide more accurate Windows OS information in client info headers,Fixed P2 Supportability bug enhancement requires more info,"**Is your feature request related to a problem? Please describe.**
This feature request is in response to a Microsoft internal customer-reported [incident](https://icm.ad.msft.net/imp/v3/incidents/details/185949148/home). The customer noticed that when making requests on Windows Server 2012, in the sign-in activity logs, they would see OS = Windows 8 instead. After internal discussion, it was determined this was due to MSAL.NET using `Environment.OSVersion.ToString();` to report the OS version, which returns the exact same value for Win8 and Win2K12. [Source](https://docs.microsoft.com/en-us/windows/win32/sysinfo/operating-system-version).
`Environment.OSVersion` is also not a reliable way to accurately determine the real OS. The recommended approach is to use the [version helper APIs](https://docs.microsoft.com/en-us/windows/win32/sysinfo/version-helper-apis).
**Describe the solution you'd like**
MSAL.NET should follow the recommendations in the docs to more accurately report the OS to avoid confusing downstream effects.
**Describe alternatives you've considered**
None
**Additional context**
ping anyoung@microsoft for more context
",True,"Provide more accurate Windows OS information in client info headers - **Is your feature request related to a problem? Please describe.**
This feature request is in response to a Microsoft internal customer-reported [incident](https://icm.ad.msft.net/imp/v3/incidents/details/185949148/home). The customer noticed that when making requests on Windows Server 2012, in the sign-in activity logs, they would see OS = Windows 8 instead. After internal discussion, it was determined this was due to MSAL.NET using `Environment.OSVersion.ToString();` to report the OS version, which returns the exact same value for Win8 and Win2K12. [Source](https://docs.microsoft.com/en-us/windows/win32/sysinfo/operating-system-version).
`Environment.OSVersion` is also not a reliable way to accurately determine the real OS. The recommended approach is to use the [version helper APIs](https://docs.microsoft.com/en-us/windows/win32/sysinfo/version-helper-apis).
**Describe the solution you'd like**
MSAL.NET should follow the recommendations in the docs to more accurately report the OS to avoid confusing downstream effects.
**Describe alternatives you've considered**
None
**Additional context**
ping anyoung@microsoft for more context
",1,provide more accurate windows os information in client info headers is your feature request related to a problem please describe this feature request is in response to a microsoft internal customer reported the customer noticed that when making requests on windows server in the sign in activity logs they would see os windows instead after internal discussion it was determined this was due to msal net using environment osversion tostring to report the os version which returns the exact same value for and environment osversion is also not a reliable way to accurately determine the real os the recommended approach is to use the describe the solution you d like msal net should follow the recommendations in the docs to more accurately report the os to avoid confusing downstream effects describe alternatives you ve considered none additional context ping anyoung microsoft for more context ,1
109978,9422121615.0,IssuesEvent,2019-04-11 08:38:52,Microsoft/AzureStorageExplorer,https://api.github.com/repos/Microsoft/AzureStorageExplorer,reopened,‘Execute Query' button isn't themed in Query document editor under Dark theme,:gear: cosmosdb :gear: theming :heavy_check_mark: merged 🧪 testing,"**Storage Explorer Version:** 20180425.3(1.0.0)
**OS Version:** Win10/Linux/Mac
**Regression**: Not a regression
**Steps to Reproduce:**
1. Launch Storage Explorer and expand Cosmos DB Accounts.
2. Change the current theme to Dark.
3. Right click one document and select 'Open Query Tab'.
4. Check the ‘Execute Query' button.
**Expected Experience:**
‘Execute Query' button is themed.
**Actual Experience:**
‘Execute Query' button isn't themed.

**More info:**
This issue also reproduces on HC-Black theme.",1.0,"‘Execute Query' button isn't themed in Query document editor under Dark theme - **Storage Explorer Version:** 20180425.3(1.0.0)
**OS Version:** Win10/Linux/Mac
**Regression**: Not a regression
**Steps to Reproduce:**
1. Launch Storage Explorer and expand Cosmos DB Accounts.
2. Change the current theme to Dark.
3. Right click one document and select 'Open Query Tab'.
4. Check the ‘Execute Query' button.
**Expected Experience:**
‘Execute Query' button is themed.
**Actual Experience:**
‘Execute Query' button isn't themed.

**More info:**
This issue also reproduces on HC-Black theme.",0,‘execute query button isn t themed in query document editor under dark theme storage explorer version os version linux mac regression not a regression steps to reproduce launch storage explorer and expand cosmos db accounts change the current theme to dark right click one document and select open query tab check the ‘execute query button expected experience ‘execute query button is themed actual experience ‘execute query button isn t themed more info this issue also reproduces on hc black theme ,0
1958,30645392868.0,IssuesEvent,2023-07-25 03:56:26,thorvg/thorvg,https://api.github.com/repos/thorvg/thorvg,closed,warnings 'comparison of integer expressions of different signedness',portability,"on Windows (msys2 mingw-w64):
```
[30/173] Compiling C++ object src/libthorvg.a.p/lib_sw_engine_tvgSwRaster.cpp.obj
../src/lib/sw_engine/tvgSwRaster.cpp: In function 'void _rasterMaskedRectInt(SwSurface*, const SwBBox&, uint8_t, uint8_t, uint8_t, uint8_t)':
../src/lib/sw_engine/tvgSwRaster.cpp:312:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord' {
aka 'long int'} [-Wsign-compare]
312 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:314:15: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwCo
ord' {aka 'const long int'} [-Wsign-compare]
314 | if (y == region.min.y) {
| ~~^~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:315:38: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwCo
ord' {aka 'const long int'} [-Wsign-compare]
315 | for (uint32_t y2 = y; y2 < region.max.y; ++y2) {
| ~~~^~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In function 'void _rasterMaskedRleInt(SwSurface*, SwRleData*, uint8_t, uint8_t, uint8_t, uint8_t)':
../src/lib/sw_engine/tvgSwRaster.cpp:502:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord' {
aka 'long int'} [-Wsign-compare]
502 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:505:18: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord' {
aka 'long int'} [-Wsign-compare]
505 | while (x < surface->compositor->bbox.max.x) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:506:63: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord' {
aka 'long int'} [-Wsign-compare]
506 | if (y == span->y && x == span->x && x + span->len <= surface->compositor->bbox.max.x) {
| ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In function 'void _rasterScaledMaskedRleImageInt(SwSurface*, const SwImage*, const tvg::Matrix*, const SwBBox&, uint8_t)':
../src/lib/sw_engine/tvgSwRaster.cpp:732:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord' {
aka 'long int'} [-Wsign-compare]
732 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:734:62: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord' {
aka 'long int'} [-Wsign-compare]
734 | for (uint32_t x = surface->compositor->bbox.min.x; x < surface->compositor->bbox.max.x; ++x) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:735:63: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord' {
aka 'long int'} [-Wsign-compare]
735 | if (y == span->y && x == span->x && x + span->len <= surface->compositor->bbox.max.x) {
| ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In function 'void _rasterDirectMaskedRleImageInt(SwSurface*, const SwImage*, uint8_t)':
../src/lib/sw_engine/tvgSwRaster.cpp:948:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord' {
aka 'long int'} [-Wsign-compare]
948 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In function 'void _rasterScaledMaskedImageInt(SwSurface*, const SwImage*, const tvg::Matrix*, const SwBBox&, uint8_t)':
../src/lib/sw_engine/tvgSwRaster.cpp:1159:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1159 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1160:15: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwC
oord' {aka 'const long int'} [-Wsign-compare]
1160 | if (y == region.min.y) {
| ~~^~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1162:38: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwC
oord' {aka 'const long int'} [-Wsign-compare]
1162 | for (uint32_t y2 = y; y2 < region.max.y; ++y2) {
| ~~~^~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1196:66: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1196 | for (uint32_t x = surface->compositor->bbox.min.x; x < surface->compositor->bbox.max.x; ++x, ++tmp) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In function 'void _rasterDirectMaskedImageInt(SwSurface*, const SwImage*, const SwBBox&, uint8_t)':
../src/lib/sw_engine/tvgSwRaster.cpp:1387:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1387 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1388:15: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwC
oord' {aka 'const long int'} [-Wsign-compare]
1388 | if (y == region.min.y) {
| ~~^~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1390:38: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwC
oord' {aka 'const long int'} [-Wsign-compare]
1390 | for (uint32_t y2 = y; y2 < region.max.y; ++y2) {
| ~~~^~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In instantiation of 'void _rasterGradientMaskedRectInt(SwSurface*, const SwBBox&, const SwFill*) [with fillMethod = FillLi
near]':
../src/lib/sw_engine/tvgSwRaster.cpp:1625:96: required from 'bool _rasterGradientMaskedRect(SwSurface*, const SwBBox&, const SwFill*) [with fillMethod = FillL
inear]'
../src/lib/sw_engine/tvgSwRaster.cpp:1708:58: required from here
../src/lib/sw_engine/tvgSwRaster.cpp:1587:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1587 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1589:15: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwC
oord' {aka 'const long int'} [-Wsign-compare]
1589 | if (y == region.min.y) {
| ~~^~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1590:38: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwC
oord' {aka 'const long int'} [-Wsign-compare]
1590 | for (uint32_t y2 = y; y2 < region.max.y; ++y2) {
| ~~~^~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In instantiation of 'void _rasterGradientMaskedRectInt(SwSurface*, const SwBBox&, const SwFill*) [with fillMethod = FillRa
dial]':
../src/lib/sw_engine/tvgSwRaster.cpp:1625:96: required from 'bool _rasterGradientMaskedRect(SwSurface*, const SwBBox&, const SwFill*) [with fillMethod = FillR
adial]'
../src/lib/sw_engine/tvgSwRaster.cpp:1725:58: required from here
../src/lib/sw_engine/tvgSwRaster.cpp:1587:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1587 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1589:15: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwC
oord' {aka 'const long int'} [-Wsign-compare]
1589 | if (y == region.min.y) {
| ~~^~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1590:38: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwC
oord' {aka 'const long int'} [-Wsign-compare]
1590 | for (uint32_t y2 = y; y2 < region.max.y; ++y2) {
| ~~~^~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In instantiation of 'void _rasterGradientMaskedRleInt(SwSurface*, const SwRleData*, const SwFill*) [with fillMethod = Fill
Linear]':
../src/lib/sw_engine/tvgSwRaster.cpp:1789:95: required from 'bool _rasterGradientMaskedRle(SwSurface*, const SwRleData*, const SwFill*) [with fillMethod = Fil
lLinear]'
../src/lib/sw_engine/tvgSwRaster.cpp:1863:57: required from here
../src/lib/sw_engine/tvgSwRaster.cpp:1762:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1762 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1765:18: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1765 | while (x < surface->compositor->bbox.max.x) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1766:63: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1766 | if (y == span->y && x == span->x && x + span->len <= surface->compositor->bbox.max.x) {
| ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In instantiation of 'void _rasterGradientMaskedRleInt(SwSurface*, const SwRleData*, const SwFill*) [with fillMethod = Fill
Radial]':
../src/lib/sw_engine/tvgSwRaster.cpp:1789:95: required from 'bool _rasterGradientMaskedRle(SwSurface*, const SwRleData*, const SwFill*) [with fillMethod = Fil
lRadial]'
../src/lib/sw_engine/tvgSwRaster.cpp:1880:57: required from here
../src/lib/sw_engine/tvgSwRaster.cpp:1762:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1762 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1765:18: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1765 | while (x < surface->compositor->bbox.max.x) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1766:63: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1766 | if (y == span->y && x == span->x && x + span->len <= surface->compositor->bbox.max.x) {
| ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```",True,"warnings 'comparison of integer expressions of different signedness' - on Windows (msys2 mingw-w64):
```
[30/173] Compiling C++ object src/libthorvg.a.p/lib_sw_engine_tvgSwRaster.cpp.obj
../src/lib/sw_engine/tvgSwRaster.cpp: In function 'void _rasterMaskedRectInt(SwSurface*, const SwBBox&, uint8_t, uint8_t, uint8_t, uint8_t)':
../src/lib/sw_engine/tvgSwRaster.cpp:312:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord' {
aka 'long int'} [-Wsign-compare]
312 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:314:15: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwCo
ord' {aka 'const long int'} [-Wsign-compare]
314 | if (y == region.min.y) {
| ~~^~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:315:38: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwCo
ord' {aka 'const long int'} [-Wsign-compare]
315 | for (uint32_t y2 = y; y2 < region.max.y; ++y2) {
| ~~~^~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In function 'void _rasterMaskedRleInt(SwSurface*, SwRleData*, uint8_t, uint8_t, uint8_t, uint8_t)':
../src/lib/sw_engine/tvgSwRaster.cpp:502:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord' {
aka 'long int'} [-Wsign-compare]
502 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:505:18: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord' {
aka 'long int'} [-Wsign-compare]
505 | while (x < surface->compositor->bbox.max.x) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:506:63: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord' {
aka 'long int'} [-Wsign-compare]
506 | if (y == span->y && x == span->x && x + span->len <= surface->compositor->bbox.max.x) {
| ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In function 'void _rasterScaledMaskedRleImageInt(SwSurface*, const SwImage*, const tvg::Matrix*, const SwBBox&, uint8_t)':
../src/lib/sw_engine/tvgSwRaster.cpp:732:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord' {
aka 'long int'} [-Wsign-compare]
732 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:734:62: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord' {
aka 'long int'} [-Wsign-compare]
734 | for (uint32_t x = surface->compositor->bbox.min.x; x < surface->compositor->bbox.max.x; ++x) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:735:63: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord' {
aka 'long int'} [-Wsign-compare]
735 | if (y == span->y && x == span->x && x + span->len <= surface->compositor->bbox.max.x) {
| ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In function 'void _rasterDirectMaskedRleImageInt(SwSurface*, const SwImage*, uint8_t)':
../src/lib/sw_engine/tvgSwRaster.cpp:948:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord' {
aka 'long int'} [-Wsign-compare]
948 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In function 'void _rasterScaledMaskedImageInt(SwSurface*, const SwImage*, const tvg::Matrix*, const SwBBox&, uint8_t)':
../src/lib/sw_engine/tvgSwRaster.cpp:1159:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1159 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1160:15: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwC
oord' {aka 'const long int'} [-Wsign-compare]
1160 | if (y == region.min.y) {
| ~~^~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1162:38: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwC
oord' {aka 'const long int'} [-Wsign-compare]
1162 | for (uint32_t y2 = y; y2 < region.max.y; ++y2) {
| ~~~^~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1196:66: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1196 | for (uint32_t x = surface->compositor->bbox.min.x; x < surface->compositor->bbox.max.x; ++x, ++tmp) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In function 'void _rasterDirectMaskedImageInt(SwSurface*, const SwImage*, const SwBBox&, uint8_t)':
../src/lib/sw_engine/tvgSwRaster.cpp:1387:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1387 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1388:15: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwC
oord' {aka 'const long int'} [-Wsign-compare]
1388 | if (y == region.min.y) {
| ~~^~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1390:38: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwC
oord' {aka 'const long int'} [-Wsign-compare]
1390 | for (uint32_t y2 = y; y2 < region.max.y; ++y2) {
| ~~~^~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In instantiation of 'void _rasterGradientMaskedRectInt(SwSurface*, const SwBBox&, const SwFill*) [with fillMethod = FillLi
near]':
../src/lib/sw_engine/tvgSwRaster.cpp:1625:96: required from 'bool _rasterGradientMaskedRect(SwSurface*, const SwBBox&, const SwFill*) [with fillMethod = FillL
inear]'
../src/lib/sw_engine/tvgSwRaster.cpp:1708:58: required from here
../src/lib/sw_engine/tvgSwRaster.cpp:1587:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1587 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1589:15: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwC
oord' {aka 'const long int'} [-Wsign-compare]
1589 | if (y == region.min.y) {
| ~~^~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1590:38: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwC
oord' {aka 'const long int'} [-Wsign-compare]
1590 | for (uint32_t y2 = y; y2 < region.max.y; ++y2) {
| ~~~^~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In instantiation of 'void _rasterGradientMaskedRectInt(SwSurface*, const SwBBox&, const SwFill*) [with fillMethod = FillRa
dial]':
../src/lib/sw_engine/tvgSwRaster.cpp:1625:96: required from 'bool _rasterGradientMaskedRect(SwSurface*, const SwBBox&, const SwFill*) [with fillMethod = FillR
adial]'
../src/lib/sw_engine/tvgSwRaster.cpp:1725:58: required from here
../src/lib/sw_engine/tvgSwRaster.cpp:1587:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1587 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1589:15: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwC
oord' {aka 'const long int'} [-Wsign-compare]
1589 | if (y == region.min.y) {
| ~~^~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1590:38: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'const SwC
oord' {aka 'const long int'} [-Wsign-compare]
1590 | for (uint32_t y2 = y; y2 < region.max.y; ++y2) {
| ~~~^~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In instantiation of 'void _rasterGradientMaskedRleInt(SwSurface*, const SwRleData*, const SwFill*) [with fillMethod = Fill
Linear]':
../src/lib/sw_engine/tvgSwRaster.cpp:1789:95: required from 'bool _rasterGradientMaskedRle(SwSurface*, const SwRleData*, const SwFill*) [with fillMethod = Fil
lLinear]'
../src/lib/sw_engine/tvgSwRaster.cpp:1863:57: required from here
../src/lib/sw_engine/tvgSwRaster.cpp:1762:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1762 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1765:18: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1765 | while (x < surface->compositor->bbox.max.x) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1766:63: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1766 | if (y == span->y && x == span->x && x + span->len <= surface->compositor->bbox.max.x) {
| ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp: In instantiation of 'void _rasterGradientMaskedRleInt(SwSurface*, const SwRleData*, const SwFill*) [with fillMethod = Fill
Radial]':
../src/lib/sw_engine/tvgSwRaster.cpp:1789:95: required from 'bool _rasterGradientMaskedRle(SwSurface*, const SwRleData*, const SwFill*) [with fillMethod = Fil
lRadial]'
../src/lib/sw_engine/tvgSwRaster.cpp:1880:57: required from here
../src/lib/sw_engine/tvgSwRaster.cpp:1762:58: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1762 | for (uint32_t y = surface->compositor->bbox.min.y; y < surface->compositor->bbox.max.y; ++y) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1765:18: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1765 | while (x < surface->compositor->bbox.max.x) {
| ~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/lib/sw_engine/tvgSwRaster.cpp:1766:63: warning: comparison of integer expressions of different signedness: 'uint32_t' {aka 'unsigned int'} and 'SwCoord'
{aka 'long int'} [-Wsign-compare]
1766 | if (y == span->y && x == span->x && x + span->len <= surface->compositor->bbox.max.x) {
| ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```",1,warnings comparison of integer expressions of different signedness on windows mingw compiling c object src libthorvg a p lib sw engine tvgswraster cpp obj src lib sw engine tvgswraster cpp in function void rastermaskedrectint swsurface const swbbox t t t t src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int for t y surface compositor bbox min y y compositor bbox max y y src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and const swco ord aka const long int if y region min y src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and const swco ord aka const long int for t y region max y src lib sw engine tvgswraster cpp in function void rastermaskedrleint swsurface swrledata t t t t src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int for t y surface compositor bbox min y y compositor bbox max y y src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int while x compositor bbox max x src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int if y span y x span x x span len compositor bbox max x src lib sw engine tvgswraster cpp in function void rasterscaledmaskedrleimageint swsurface const swimage const tvg matrix const swbbox t src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int for t y surface compositor bbox min y y compositor bbox max y y src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int for t x surface compositor bbox min x x compositor bbox max x x src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int if y span y x span x x span len compositor bbox max x src lib sw engine tvgswraster cpp in function void rasterdirectmaskedrleimageint swsurface const swimage t src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int for t y surface compositor bbox min y y compositor bbox max y y src lib sw engine tvgswraster cpp in function void rasterscaledmaskedimageint swsurface const swimage const tvg matrix const swbbox t src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int for t y surface compositor bbox min y y compositor bbox max y y src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and const swc oord aka const long int if y region min y src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and const swc oord aka const long int for t y region max y src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int for t x surface compositor bbox min x x compositor bbox max x x tmp src lib sw engine tvgswraster cpp in function void rasterdirectmaskedimageint 
swsurface const swimage const swbbox t src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int for t y surface compositor bbox min y y compositor bbox max y y src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and const swc oord aka const long int if y region min y src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and const swc oord aka const long int for t y region max y src lib sw engine tvgswraster cpp in instantiation of void rastergradientmaskedrectint swsurface const swbbox const swfill with fillmethod fillli near src lib sw engine tvgswraster cpp required from bool rastergradientmaskedrect swsurface const swbbox const swfill with fillmethod filll inear src lib sw engine tvgswraster cpp required from here src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int for t y surface compositor bbox min y y compositor bbox max y y src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and const swc oord aka const long int if y region min y src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and const swc oord aka const long int for t y region max y src lib sw engine tvgswraster cpp in instantiation of void rastergradientmaskedrectint swsurface const swbbox const swfill with fillmethod fillra dial src lib sw engine tvgswraster cpp required from bool rastergradientmaskedrect swsurface const swbbox const swfill with fillmethod fillr adial src lib sw engine tvgswraster cpp required from here src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int for t y surface compositor bbox min y y compositor bbox max y y src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and const swc oord aka const long int if y region min y src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and const swc oord aka const long int for t y region max y src lib sw engine tvgswraster cpp in instantiation of void rastergradientmaskedrleint swsurface const swrledata const swfill with fillmethod fill linear src lib sw engine tvgswraster cpp required from bool rastergradientmaskedrle swsurface const swrledata const swfill with fillmethod fil llinear src lib sw engine tvgswraster cpp required from here src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int for t y surface compositor bbox min y y compositor bbox max y y src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int while x compositor bbox max x src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int if y span y x span x x span len compositor bbox max x src lib sw engine tvgswraster cpp in instantiation of void rastergradientmaskedrleint swsurface const swrledata const swfill with fillmethod fill radial src lib sw engine tvgswraster cpp required from bool 
rastergradientmaskedrle swsurface const swrledata const swfill with fillmethod fil lradial src lib sw engine tvgswraster cpp required from here src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int for t y surface compositor bbox min y y compositor bbox max y y src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int while x compositor bbox max x src lib sw engine tvgswraster cpp warning comparison of integer expressions of different signedness t aka unsigned int and swcoord aka long int if y span y x span x x span len compositor bbox max x ,1
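All of the -Wsign-compare diagnostics quoted above share one cause: the loops count with an unsigned `uint32_t` while the compositor bounding-box coordinates are `SwCoord`, which GCC reports as a signed `long int`. The following is a minimal sketch of that mismatch and of one conventional way to silence it (use the signed coordinate type for the counter); it is not ThorVG's actual patch, and `SwCoord`/`BBox` here are stand-ins for the real definitions.
```
#include <cstdint>

using SwCoord = long;   // the diagnostics report SwCoord as 'long int'

struct BBox { struct { SwCoord x, y; } min, max; };

long countRows(const BBox& bbox)
{
    long rows = 0;

    // Triggers -Wsign-compare: 'uint32_t' vs 'SwCoord' {aka 'long int'}.
    // for (uint32_t y = bbox.min.y; y < bbox.max.y; ++y) ++rows;

    // Keeping both sides of the comparison signed avoids the warning.
    for (SwCoord y = bbox.min.y; y < bbox.max.y; ++y) ++rows;

    return rows;
}

int main()
{
    BBox bbox{{0, 0}, {4, 3}};            // min = (0,0), max = (4,3)
    return countRows(bbox) == 3 ? 0 : 1;  // 3 rows: y = 0, 1, 2
}
```
Casting the bbox limits to the unsigned counter type would also quiet the warning, but keeping both sides signed avoids surprises if a coordinate can ever be negative.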
26356,20023344677.0,IssuesEvent,2022-02-01 18:30:37,microsoft/WindowsAppSDK,https://api.github.com/repos/microsoft/WindowsAppSDK,closed,Files are copied twice into application package,bug area-Infrastructure,"**Describe the bug**
The packaging project copies files twice into the package output folder: once into the root directory and once into a subdirectory.
**Steps to reproduce the bug**
Steps to reproduce the behavior:
1. Create a new c# winui desktop project
2. Build and deploy the package
3. Observe that multiple files get copied twice into the AppX output directory and in the subfolder with the application name (e.g. Microsoft.ui.xaml.dll). Same issue can be observed when building an msix package
4. Adding a file (e.g. a text file) to the project, setting its type to ""content"" and enabling the ""copy to output folder"" option results in the same issue
**Expected behavior**
Files should only be copied once. Preferably, they should be copied into the root directory of the package folder to avoid differences to future unpackaged deployment.
**Screenshots**
**Version Info**
NuGet package version:
[Microsoft.ProjectReunion 0.5.0-prerelease]
| Windows 10 version | Saw the problem? |
| :--------------------------------- | :-------------------- |
| Insider Build (xxxxx) | |
| May 2020 Update (19041) | Yes |
| November 2019 Update (18363) | |
| May 2019 Update (18362) | |
| October 2018 Update (17763) | |
**Additional context**
http://task.ms/37784910
",1.0,"Files are copied twice into application package - **Describe the bug**
The packaging project copies files twice into the package output folder. One time into the root directory and one time into a subdirectory
**Steps to reproduce the bug**
Steps to reproduce the behavior:
1. Create a new c# winui desktop project
2. Build and deploy the package
3. Observe that multiple files get copied twice into the AppX output directory and in the subfolder with the application name (e.g. Microsoft.ui.xaml.dll). Same issue can be observed when building an msix package
4. Adding a file (e.g. a text file) to the project, setting its type to ""content"" and enabling the ""copy to output folder"" option results in the same issue
**Expected behavior**
Files should only be copied once. Preferably, they should be copied into the root directory of the package folder to avoid differences to future unpackaged deployment.
**Screenshots**
**Version Info**
NuGet package version:
[Microsoft.ProjectReunion 0.5.0-prerelease]
| Windows 10 version | Saw the problem? |
| :--------------------------------- | :-------------------- |
| Insider Build (xxxxx) | |
| May 2020 Update (19041) | Yes |
| November 2019 Update (18363) | |
| May 2019 Update (18362) | |
| October 2018 Update (17763) | |
**Additional context**
http://task.ms/37784910
",0,files are copied twice into application package describe the bug the packaging project copies files twice into the package output folder one time into the root directory and one time into a subdirectory steps to reproduce the bug steps to reproduce the behavior create a new c winui desktop project build and deploy the package observe that multiple files get copied twice into the appx output directory and in the subfolder with the application name e g microsoft ui xaml dll same issue can be observed when building an msix package adding a file e g a text file to the project setting its type to content and enabling the copy to output folder option results in the same issue expected behavior files should only be copied once preferably they should be copied into the root directory of the package folder to avoid differences to future unpackaged deployment screenshots version info nuget package version windows version saw the problem insider build xxxxx may update yes november update may update october update additional context ,0
262970,19849452683.0,IssuesEvent,2022-01-21 10:36:13,timoast/signac,https://api.github.com/repos/timoast/signac,opened,CreateChromatinAssay,documentation,"Hey,
like in issue #937 I am still trying to create a chromatin assay. I checked that I am now using the count matrix as input. My new row- and colnames are the following:
`head(rownames(ATAC_subset_S1D1$X))
head(colnames(ATAC_subset_S1D1$X))`
`'TAGTTGTCACCCTCAC-1-s1d1''CTATGGCCATAACGGG-1-s1d1''CCGCACACAGGTTAAA-1-s1d1''TCATTTGGTAATGGAA-1-s1d1''ACCACATAGGTGTCCA-1-s1d1''TGGATTGGTTTGCGAA-1-s1d1'
'chr1-9776-10668''chr1-180726-181005''chr1-181117-181803''chr1-191133-192055''chr1-267562-268456''chr1-629497-630394'`
But I am still getting the same error when creating the chromatin assay
`ATAC_Seu<-CreateChromatinAssay(counts=ATAC_subset_S1D1$X)`
```
Error in .get_data_frame_col_as_numeric(df, granges_cols[[""end""]]): some values in the ""end"" column cannot be turned into numeric values
Traceback:
1. CreateChromatinAssay(counts = ATAC_subset_S1D1$X)
2. StringToGRanges(regions = rownames(x = data.use), sep = sep)
3. makeGRangesFromDataFrame(df = ranges.df, ...)
4. .get_data_frame_col_as_numeric(df, granges_cols[[""end""]])
5. stop(wmsg(""some values in the "", ""\"""", names(df)[[col]], ""\"" "",
. ""column cannot be turned into numeric values""))
```
I would really appreciate your help again.
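One reading of the traceback above, judging only from the printed names: `CreateChromatinAssay` parses rownames as `chr-start-end` regions, yet the rownames shown are cell barcodes (e.g. `TAGTTGTCACCCTCAC-1-s1d1`) while the colnames are the regions, so the matrix may simply be transposed. The sketch below is not Signac's R code; it only illustrates why a barcode trips the "end column cannot be turned into numeric values" check while a region name does not.
```
#include <cctype>
#include <cstddef>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Split "a-b-c" on '-' and require the last two pieces to be purely numeric,
// mimicking the chrom/start/end parsing the error message comes from.
bool parsesAsRegion(const std::string& name)
{
    std::vector<std::string> parts;
    std::stringstream stream(name);
    std::string piece;
    while (std::getline(stream, piece, '-')) parts.push_back(piece);
    if (parts.size() != 3) return false;
    for (std::size_t i = 1; i < parts.size(); ++i) {
        if (parts[i].empty()) return false;
        for (char c : parts[i])
            if (!std::isdigit(static_cast<unsigned char>(c))) return false;
    }
    return true;
}

int main()
{
    std::cout << parsesAsRegion("chr1-9776-10668") << "\n";          // 1: start and end are numeric
    std::cout << parsesAsRegion("TAGTTGTCACCCTCAC-1-s1d1") << "\n";  // 0: "s1d1" is not a number
    return 0;
}
```
If that reading is right, transposing the matrix so that peaks are rows and cells are columns before calling `CreateChromatinAssay` would be the thing to try.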
",1.0,"CreateChromatinAssay - Hey,
like in issue #937 I am still trying to create a chromatin assay. I checked that I am using now using the count matrix as input. My new row- and colnames are the following
`head(rownames(ATAC_subset_S1D1$X))
head(colnames(ATAC_subset_S1D1$X))`
`'TAGTTGTCACCCTCAC-1-s1d1''CTATGGCCATAACGGG-1-s1d1''CCGCACACAGGTTAAA-1-s1d1''TCATTTGGTAATGGAA-1-s1d1''ACCACATAGGTGTCCA-1-s1d1''TGGATTGGTTTGCGAA-1-s1d1'
'chr1-9776-10668''chr1-180726-181005''chr1-181117-181803''chr1-191133-192055''chr1-267562-268456''chr1-629497-630394'`
But I am still getting the same error when creating the chromatin assay
`ATAC_Seu<-CreateChromatinAssay(counts=ATAC_subset_S1D1$X)`
```
Error in .get_data_frame_col_as_numeric(df, granges_cols[[""end""]]): some values in the ""end"" column cannot be turned into numeric values
Traceback:
1. CreateChromatinAssay(counts = ATAC_subset_S1D1$X)
2. StringToGRanges(regions = rownames(x = data.use), sep = sep)
3. makeGRangesFromDataFrame(df = ranges.df, ...)
4. .get_data_frame_col_as_numeric(df, granges_cols[[""end""]])
5. stop(wmsg(""some values in the "", ""\"""", names(df)[[col]], ""\"" "",
. ""column cannot be turned into numeric values""))
```
I would really appreciate your help again.
",0,createchromatinassay hey like in issue i am still trying to create a chromatin assay i checked that i am using now using the count matrix as input my new row and colnames are the following head rownames atac subset x head colnames atac subset x tagttgtcaccctcac ctatggccataacggg ccgcacacaggttaaa tcatttggtaatggaa accacataggtgtcca tggattggtttgcgaa but i am still getting the same error when creating the chromatin assay atac seu createchromatinassay counts atac subset x error in get data frame col as numeric df granges cols some values in the end column cannot be turned into numeric values traceback createchromatinassay counts atac subset x stringtogranges regions rownames x data use sep sep makegrangesfromdataframe df ranges df get data frame col as numeric df granges cols stop wmsg some values in the names df column cannot be turned into numeric values i would really appreciate your help again ,0
282179,8704290844.0,IssuesEvent,2018-12-05 18:59:52,AICrowd/AIcrowd,https://api.github.com/repos/AICrowd/AIcrowd,closed,Drafts challenges are publicly visible (no access check done),high priority,"_From @spMohanty on April 26, 2018 15:39_
https://www.crowdai.org/challenges/marlo-2018
_Copied from original issue: crowdAI/crowdai#724_",1.0,"Drafts challenges are publicly visible (no access check done) - _From @spMohanty on April 26, 2018 15:39_
https://www.crowdai.org/challenges/marlo-2018
_Copied from original issue: crowdAI/crowdai#724_",0,drafts challenges are publicly visible no access check done from spmohanty on april copied from original issue crowdai crowdai ,0
1670,24147474783.0,IssuesEvent,2022-09-21 20:11:31,golang/vulndb,https://api.github.com/repos/golang/vulndb,closed,x/vulndb: potential Go vuln in github.com/ouqiang/gocron: CVE-2022-40365,NeedsTriage excluded: NOT_IMPORTABLE,"CVE-2022-40365 references [github.com/ouqiang/gocron](https://github.com/ouqiang/gocron), which may be a Go module.
Description:
Cross site scripting (XSS) vulnerability in ouqiang gocron through 1.5.3, allows attackers to execute arbitrary code via scope.row.hostname in web/vue/src/pages/taskLog/list.vue.
References:
- NIST: https://nvd.nist.gov/vuln/detail/CVE-2022-40365
- JSON: https://github.com/CVEProject/cvelist/tree/15da3e2a1b5e0c625987f28c5c7cbd7af9a2e5e8/2022/40xxx/CVE-2022-40365.json
- web: https://github.com/ouqiang/gocron
- web: https://github.com/ouqiang/gocron/issues/362
- Imported by: https://pkg.go.dev/github.com/ouqiang/gocron?tab=importedby
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/ouqiang/gocron
packages:
- package: n/a
description: |
Cross site scripting (XSS) vulnerability in ouqiang gocron through 1.5.3, allows attackers to execute arbitrary code via scope.row.hostname in web/vue/src/pages/taskLog/list.vue.
cves:
- CVE-2022-40365
references:
- web: https://github.com/ouqiang/gocron
- web: https://github.com/ouqiang/gocron/issues/362
```",True,"x/vulndb: potential Go vuln in github.com/ouqiang/gocron: CVE-2022-40365 - CVE-2022-40365 references [github.com/ouqiang/gocron](https://github.com/ouqiang/gocron), which may be a Go module.
Description:
Cross site scripting (XSS) vulnerability in ouqiang gocron through 1.5.3, allows attackers to execute arbitrary code via scope.row.hostname in web/vue/src/pages/taskLog/list.vue.
References:
- NIST: https://nvd.nist.gov/vuln/detail/CVE-2022-40365
- JSON: https://github.com/CVEProject/cvelist/tree/15da3e2a1b5e0c625987f28c5c7cbd7af9a2e5e8/2022/40xxx/CVE-2022-40365.json
- web: https://github.com/ouqiang/gocron
- web: https://github.com/ouqiang/gocron/issues/362
- Imported by: https://pkg.go.dev/github.com/ouqiang/gocron?tab=importedby
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/ouqiang/gocron
packages:
- package: n/a
description: |
Cross site scripting (XSS) vulnerability in ouqiang gocron through 1.5.3, allows attackers to execute arbitrary code via scope.row.hostname in web/vue/src/pages/taskLog/list.vue.
cves:
- CVE-2022-40365
references:
- web: https://github.com/ouqiang/gocron
- web: https://github.com/ouqiang/gocron/issues/362
```",1,x vulndb potential go vuln in github com ouqiang gocron cve cve references which may be a go module description cross site scripting xss vulnerability in ouqiang gocron through allows attackers to execute arbitrary code via scope row hostname in web vue src pages tasklog list vue references nist json web web imported by see for instructions on how to triage this report modules module github com ouqiang gocron packages package n a description cross site scripting xss vulnerability in ouqiang gocron through allows attackers to execute arbitrary code via scope row hostname in web vue src pages tasklog list vue cves cve references web web ,1
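For readers triaging this class of report: the CVE above describes a value under attacker control (`scope.row.hostname`) reaching markup without escaping. The snippet below is a generic C++ illustration of why that matters, not gocron's code (its frontend is Vue); the escaping table is the usual minimal HTML set.
```
#include <iostream>
#include <string>

// Replace the characters that let a value break out of HTML text context.
std::string escapeHtml(const std::string& in)
{
    std::string out;
    out.reserve(in.size());
    for (char c : in) {
        switch (c) {
            case '&':  out += "&amp;";  break;
            case '<':  out += "&lt;";   break;
            case '>':  out += "&gt;";   break;
            case '"':  out += "&quot;"; break;
            case '\'': out += "&#39;";  break;
            default:   out += c;        break;
        }
    }
    return out;
}

int main()
{
    std::string hostname = "<script>alert(1)</script>";             // attacker-controlled input
    std::cout << "unsafe: <td>" + hostname + "</td>\n";             // markup is injected as-is
    std::cout << "safe:   <td>" + escapeHtml(hostname) + "</td>\n"; // rendered as inert text
    return 0;
}
```
Frameworks normally apply this escaping by default; the bug class usually appears where a template opts into raw, unescaped interpolation.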
778,10277090780.0,IssuesEvent,2019-08-25 00:21:16,SanderMertens/flecs,https://api.github.com/repos/SanderMertens/flecs,closed,Remove `bake.utils` dependency,portability,"* Substitute all `ut_*` functions with `ecs_os_api_*` callbacks.
* Move [bake.utils](https://github.com/SanderMertens/flecs/blob/master/src/os_api.c#L21) to a separate `flecs.bake` module. It will set up the os api within its module import function.
* Remove `bake.utils` dependency from all modules; use the os api instead.",True,"Remove `bake.utils` dependency - * Substitute all `ut_*` function with `ecs_os_api_*` callbacks.
* Move [bake.utils](https://github.com/SanderMertens/flecs/blob/master/src/os_api.c#L21) to separate `flecs.bake` module. It will setup os api within module import function.
* Remove `bake.utils` dependency from all modules use os api instead.",1,remove bake utils dependency substitute all ut function with ecs os api callbacks move to separate flecs bake module it will setup os api within module import function remove bake utils dependency from all modules use os api instead ,1
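The checklist above is the classic OS-abstraction-table refactor: the core calls platform facilities only through a struct of function pointers, and a platform module overwrites entries from its import function. The sketch below uses hypothetical names (`os_api_t`, `platform_module_import`) rather than flecs' actual `ecs_os_api_*` definitions, which should be taken from the `os_api.c` file linked above.
```
#include <cstddef>
#include <cstdio>
#include <cstdlib>

// Table of OS-level callbacks; the names are illustrative, not flecs' API.
struct os_api_t {
    void* (*malloc_)(std::size_t size);
    void  (*free_)(void* ptr);
    void  (*log_)(const char* msg);
};

// The core keeps one table and never calls platform facilities directly.
static os_api_t os_api = {
    [](std::size_t size) { return std::malloc(size); },
    [](void* ptr) { std::free(ptr); },
    [](const char* msg) { std::fprintf(stderr, "%s\n", msg); },
};

// A platform support module (the proposed flecs.bake, for instance) swaps in
// its own callbacks from its import function.
void platform_module_import()
{
    os_api.log_ = [](const char* msg) { std::printf("[platform] %s\n", msg); };
}

int main()
{
    os_api.log_("before import");        // default stderr logger
    platform_module_import();
    os_api.log_("after import");         // platform-provided logger
    void* buffer = os_api.malloc_(16);
    os_api.free_(buffer);
    return 0;
}
```
The payoff is portability: with `bake.utils` removed, the core carries no hard dependency on any one platform layer, and each module can install whatever callbacks suit its environment.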
146132,19393879226.0,IssuesEvent,2021-12-18 01:23:29,mgh3326/studyolle,https://api.github.com/repos/mgh3326/studyolle,opened,CVE-2021-42550 (Medium) detected in logback-classic-1.2.3.jar,security vulnerability,"## CVE-2021-42550 - Medium Severity Vulnerability
Vulnerable Library - logback-classic-1.2.3.jar
logback-classic module
Library home page: http://logback.qos.ch
Path to dependency file: studyolle/pom.xml
Path to vulnerable library: /home/wss-scanner/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar
Dependency Hierarchy:
- spring-boot-starter-mail-2.3.3.RELEASE.jar (Root Library)
- spring-boot-starter-2.3.3.RELEASE.jar
- spring-boot-starter-logging-2.3.3.RELEASE.jar
- :x: **logback-classic-1.2.3.jar** (Vulnerable Library)
Found in base branch: master
Vulnerability Details
In logback version 1.2.7 and prior versions, an attacker with the required privileges to edit configurations files could craft a malicious configuration allowing to execute arbitrary code loaded from LDAP servers.
Publish Date: 2021-12-16
URL: CVE-2021-42550
CVSS 3 Score Details (6.6 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: http://logback.qos.ch/news.html
Release Date: 2021-12-16
Fix Resolution: ch.qos.logback:logback-classic:1.2.8
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2021-42550 (Medium) detected in logback-classic-1.2.3.jar - ## CVE-2021-42550 - Medium Severity Vulnerability
Vulnerable Library - logback-classic-1.2.3.jar
logback-classic module
Library home page: http://logback.qos.ch
Path to dependency file: studyolle/pom.xml
Path to vulnerable library: /home/wss-scanner/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar
Dependency Hierarchy:
- spring-boot-starter-mail-2.3.3.RELEASE.jar (Root Library)
- spring-boot-starter-2.3.3.RELEASE.jar
- spring-boot-starter-logging-2.3.3.RELEASE.jar
- :x: **logback-classic-1.2.3.jar** (Vulnerable Library)
Found in base branch: master
Vulnerability Details
In logback version 1.2.7 and prior versions, an attacker with the required privileges to edit configurations files could craft a malicious configuration allowing to execute arbitrary code loaded from LDAP servers.
Publish Date: 2021-12-16
URL: CVE-2021-42550
CVSS 3 Score Details (6.6 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: http://logback.qos.ch/news.html
Release Date: 2021-12-16
Fix Resolution: ch.qos.logback:logback-classic:1.2.8
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in logback classic jar cve medium severity vulnerability vulnerable library logback classic jar logback classic module library home page a href path to dependency file studyolle pom xml path to vulnerable library home wss scanner repository ch qos logback logback classic logback classic jar dependency hierarchy spring boot starter mail release jar root library spring boot starter release jar spring boot starter logging release jar x logback classic jar vulnerable library found in base branch master vulnerability details in logback version and prior versions an attacker with the required privileges to edit configurations files could craft a malicious configuration allowing to execute arbitrary code loaded from ldap servers publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ch qos logback logback classic step up your open source security game with whitesource ,0
4252,5009035361.0,IssuesEvent,2016-12-12 21:13:40,decred/dcrwallet,https://api.github.com/repos/decred/dcrwallet,closed,"--profile should take a listen address, not a port.",breaking-change feature-addition security,"It should be consistent with --rpclisten and --experimentalrpclisten. Additionally, I consider listening on all interfaces by default a security concern.",True,"--profile should take a listen address, not a port. - It should be consistent with --rpclisten and --experimentalrpclisten. Additionally, I consider listening on all interfaces by default a security concern.",0, profile should take a listen address not a port it should be consistent with rpclisten and experimentalrpclisten additionally i consider listening on all interfaces by default a security concern ,0
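A small sketch of the change requested above, written in C++ purely for illustration (dcrwallet itself is Go): accept a full `host:port` listen address with a loopback default rather than a bare port, so the endpoint is not bound on every interface unless the operator explicitly asks for it. The flag values and port number below are placeholders, not dcrwallet defaults.
```
#include <iostream>
#include <string>

// Map a --profile value to the address that would actually be bound.
// A bare port stays on loopback instead of implying 0.0.0.0:<port>.
std::string resolveProfileListen(const std::string& value)
{
    if (value.find(':') == std::string::npos) return "127.0.0.1:" + value;
    return value;  // already "host:port"; honour it as given
}

int main()
{
    std::cout << resolveProfileListen("6061") << "\n";          // -> 127.0.0.1:6061
    std::cout << resolveProfileListen("0.0.0.0:6061") << "\n";  // explicit opt-in to all interfaces
    return 0;
}
```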
64070,6892212211.0,IssuesEvent,2017-11-22 20:02:48,golang/go,https://api.github.com/repos/golang/go,opened,x/net/http2: stderr spam from running tests,HelpWanted NeedsFix Testing,"Tests shouldn't spam to stderr:
```
$ go test -short
2017/11/22 20:01:28 protocol error: received DATA on a HEAD request
2017/11/22 20:01:31 protocol error: received DATA before a HEADERS frame
PASS
ok golang.org/x/net/http2 5.150s
```
From:
```
=== RUN TestTransportReadHeadResponseWithBody
2017/11/22 20:00:08 protocol error: received DATA on a HEAD request
--- PASS: TestTransportReadHeadResponseWithBody (0.00s)
...
=== RUN TestTransportResponseDataBeforeHeaders
2017/11/22 20:01:54 protocol error: received DATA before a HEADERS frame
--- PASS: TestTransportResponseDataBeforeHeaders (0.00s)
```
/cc @tombergan ",1.0,"x/net/http2: stderr spam from running tests - Tests shouldn't spam to stderr:
```
$ go test -short
2017/11/22 20:01:28 protocol error: received DATA on a HEAD request
2017/11/22 20:01:31 protocol error: received DATA before a HEADERS frame
PASS
ok golang.org/x/net/http2 5.150s
```
From:
```
=== RUN TestTransportReadHeadResponseWithBody
2017/11/22 20:00:08 protocol error: received DATA on a HEAD request
--- PASS: TestTransportReadHeadResponseWithBody (0.00s)
...
=== RUN TestTransportResponseDataBeforeHeaders
2017/11/22 20:01:54 protocol error: received DATA before a HEADERS frame
--- PASS: TestTransportResponseDataBeforeHeaders (0.00s)
```
/cc @tombergan ",0,x net stderr spam from running tests tests shouldn t spam to stderr go test short protocol error received data on a head request protocol error received data before a headers frame pass ok golang org x net from run testtransportreadheadresponsewithbody protocol error received data on a head request pass testtransportreadheadresponsewithbody run testtransportresponsedatabeforeheaders protocol error received data before a headers frame pass testtransportresponsedatabeforeheaders cc tombergan ,0
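The design point behind the report above is that library code logging through a global logger will write to stderr whenever tests exercise those paths. One common remedy is an injectable logger so a test can capture or silence the output; the sketch below shows that pattern in C++ for illustration only and is not the actual x/net/http2 (Go) change.
```
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

class Transport {
public:
    using Logger = std::function<void(const std::string&)>;

    // Default behaves like a global logger writing to stderr.
    explicit Transport(Logger log = [](const std::string& m) { std::cerr << m << "\n"; })
        : log_(std::move(log)) {}

    void onUnexpectedData()
    {
        log_("protocol error: received DATA before a HEADERS frame");
    }

private:
    Logger log_;
};

int main()
{
    std::vector<std::string> captured;

    // A test installs a capturing logger, so nothing reaches stderr.
    Transport transport([&](const std::string& m) { captured.push_back(m); });
    transport.onUnexpectedData();

    return captured.size() == 1 ? 0 : 1;
}
```
In Go, the analogous moves are logging via `t.Logf` inside the tests themselves or giving the library a configurable logger it uses instead of the global one.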
1097,14013948414.0,IssuesEvent,2020-10-29 11:09:54,Alistair-Bell/Mage-Engine,https://api.github.com/repos/Alistair-Bell/Mage-Engine,closed,Failing to call premake on windows,bug portability,"Tried on windows machine, had problems with generating vs projects",True,"Failing to call premake on windows - Tried on windows machine, had problems with generating vs projects",1,failing to call premake on windows tried on windows machine had problems with generating vs projects,1
257675,27563807527.0,IssuesEvent,2023-03-08 01:07:55,LynRodWS/alcor,https://api.github.com/repos/LynRodWS/alcor,opened,CVE-2020-36188 (High) detected in jackson-databind-2.9.9.jar,security vulnerability,"## CVE-2020-36188 - High Severity Vulnerability
Vulnerable Library - jackson-databind-2.9.9.jar
General data-binding functionality for Jackson: works on core streaming API
Library home page: http://github.com/FasterXML/jackson
Path to dependency file: /services/api_gateway/pom.xml
Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar
Dependency Hierarchy:
- spring-cloud-starter-netflix-hystrix-2.1.2.RELEASE.jar (Root Library)
- hystrix-serialization-1.5.18.jar
- :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)
Found in base branch: master
Vulnerability Details
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.JNDIConnectionSource.
Publish Date: 2021-01-06
URL: CVE-2020-36188
CVSS 3 Score Details (8.1 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Release Date: 2021-01-06
Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.9.10.8
Direct dependency fix Resolution (org.springframework.cloud:spring-cloud-starter-netflix-hystrix): 2.1.3.RELEASE
***
:rescue_worker_helmet: Automatic Remediation is available for this issue",True,"CVE-2020-36188 (High) detected in jackson-databind-2.9.9.jar - ## CVE-2020-36188 - High Severity Vulnerability
Vulnerable Library - jackson-databind-2.9.9.jar
General data-binding functionality for Jackson: works on core streaming API
Library home page: http://github.com/FasterXML/jackson
Path to dependency file: /services/api_gateway/pom.xml
Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar,/home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar
Dependency Hierarchy:
- spring-cloud-starter-netflix-hystrix-2.1.2.RELEASE.jar (Root Library)
- hystrix-serialization-1.5.18.jar
- :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)
Found in base branch: master
Vulnerability Details
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to com.newrelic.agent.deps.ch.qos.logback.core.db.JNDIConnectionSource.
Publish Date: 2021-01-06
URL: CVE-2020-36188
CVSS 3 Score Details (8.1 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Release Date: 2021-01-06
Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.9.10.8
Direct dependency fix Resolution (org.springframework.cloud:spring-cloud-starter-netflix-hystrix): 2.1.3.RELEASE
***
:rescue_worker_helmet: Automatic Remediation is available for this issue",0,cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file services api gateway pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring cloud starter netflix hystrix release jar root library hystrix serialization jar x jackson databind jar vulnerable library found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com newrelic agent deps ch qos logback core db jndiconnectionsource publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution com fasterxml jackson core jackson databind direct dependency fix resolution org springframework cloud spring cloud starter netflix hystrix release rescue worker helmet automatic remediation is available for this issue,0
578,7978013950.0,IssuesEvent,2018-07-17 16:58:51,chapel-lang/chapel,https://api.github.com/repos/chapel-lang/chapel,opened,Make Chapel installation easier,area: BTR type: Feature Request type: Portability,"Can we build and release binary packages ourselves and provide a one-liner to
do installation?
Today, our standard install requires downloading the tarball and building from
source. We have binary distributions for Crays (as a Cray module) and for Mac
(through homebrew) but we don't have straightforward binary installations for
*nix platforms.
We have put some effort into trying to get a Debian package:
- https://github.com/chapel-lang/chapel-packaging/tree/master/debian
- https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=880991
But the main disadvantages there are that Debian has pretty strict packing
rules (and we're very far from packaging experts.) This means we can't bundle
any of our third-party, so in addition to it being a lot of work on our end,
we'd still only end up with a minimal quickstart installation of Chapel, which
would only be useful for playing/experimenting but performance would be bad.
Additionally, since our performance and stability are improving pretty
dramatically with each release, we want people using the latest release, which
is pretty hard to do with the lead time for updating official Debian packages
and having those trickle downstream to Ubuntu and other distros.
So for this issue -- can we build and distribute our own binaries for platforms
and architectures that are most important to us and how can we make it simple
for users to install and update?
",True,"Make Chapel installation easier - Can we build and release binary packages ourselves and provide a one-liner to
do installation?
Today, our standard install requires downloading the tarball and building from
source. We have binary distributions for Crays (as a Cray module) and for Mac
(through homebrew) but we don't have straightforward binary installations for
*nix platforms.
We have put some effort into trying to get a Debian package:
- https://github.com/chapel-lang/chapel-packaging/tree/master/debian
- https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=880991
But the main disadvantages there are that Debian has pretty strict packing
rules (and we're very far from packaging experts.) This means we can't bundle
any of our third-party, so in addition to it being a lot of work on our end,
we'd still only end up with a minimal quickstart installation of Chapel, which
would only be useful for playing/experimenting but performance would be bad.
Additionally, since our performance and stability are improving pretty
dramatically with each release we want people using the latest release, which
is pretty hard to do with the lead time for updating official Debian packages
and having those trickle downstream to Ubuntu and other distros.
So for this issue -- can build and distribute our own binaries for platforms
and architectures that are most important to us and how can we make it simple
for users to install and update?
",1,make chapel installation easier can we build and release binary packages ourselves and provide a one liner to do installation today our standard install requires downloading the tarball and building from source we have binary distributions for crays as a cray module and for mac through homebrew but we don t have straightforward binary installations for nix platforms we have put some effort into trying to get a debian package but the main disadvantages there are that debian has pretty strict packing rules and we re very far from packaging experts this means we can t bundle any of our third party so in addition to it being a lot of work on our end we d still only end up with a minimal quickstart installation of chapel which would only be useful for playing experimenting but performance would be bad additionally since our performance and stability are improving pretty dramatically with each release we want people using the latest release which is pretty hard to do with the lead time for updating official debian packages and having those trickle downstream to ubuntu and other distros so for this issue can build and distribute our own binaries for platforms and architectures that are most important to us and how can we make it simple for users to install and update ,1
438915,30668851842.0,IssuesEvent,2023-07-25 20:32:34,jetstream-cloud/js2docs,https://api.github.com/repos/jetstream-cloud/js2docs,opened,[documentation] Add article for extending volume,documentation,"## Opportunity
This seems like a fairly common things users will want to do, and we get a good number of tickets asking how to do this. Here is an example of a ticket, including my reply:
https://access-ci.atlassian.net/browse/ATS-1987
## Resolution
We should add a page on the public docs for how to do this.",1.0,"[documentation] Add article for extending volume - ## Opportunity
This seems like a fairly common things users will want to do, and we get a good number of tickets asking how to do this. Here is an example of a ticket, including my reply:
https://access-ci.atlassian.net/browse/ATS-1987
## Resolution
We should add a page on the public docs for how to do this.",0, add article for extending volume opportunity this seems like a fairly common things users will want to do and we get a good number of tickets asking how to do this here is an example of a ticket including my reply resolution we should add a page on the public docs for how to do this ,0
661,8750938731.0,IssuesEvent,2018-12-13 20:44:11,Azure/azure-functions-host,https://api.github.com/repos/Azure/azure-functions-host,closed,Warning about async void functions does not have function name in logs,Supportability,"Found some log entries in our system logs that have a message that looks like this:
`Function 'RunAsync' is async but does not return a Task. Your function may not run correctly.`
There are two problems:
1. RunAsync is the method name, not the function name. So the log message needs to include the actual function name.
2. The FunctionName column is not populated.
We need this information to be able to display relevant dynamic help when customers start the support ticket flow.
",True,"Warning about async void functions does not have function name in logs - Found some log entries in our system logs that have a message that looks like this:
`Function 'RunAsync' is async but does not return a Task. Your function may not run correctly.`
There are two problems:
1. RunAsync is the method name, not the function name. So the log message needs to include the actual function name.
2. The FunctionName column is not populated.
We need this information to be able to display relevant dynamic help when customers start the support ticket flow.
",1,warning about async void functions does not have function name in logs found some log entries in our system logs that have a message that looks like this function runasync is async but does not return a task your function may not run correctly there are two problems runasync is the method name not the function name so the log message needs to include the actual function name the functionname column is not populated we need this information to be able to display relevant dynamic help when customers start the support ticket flow ,1
191,4016344047.0,IssuesEvent,2016-05-15 14:54:09,roc-project/roc,https://api.github.com/repos/roc-project/roc,opened,OpenMAX support,enhancement portability wishlist,"On Raspberry Pi, we can take advantage of OpenMAX support which provides API to hardware accelerated codecs and playback.",True,"OpenMAX support - On Raspberry Pi, we can take advantage of OpenMAX support which provides API to hardware accelerated codecs and playback.",1,openmax support on raspberry pi we can take advantage of openmax support which provides api to hardware accelerated codecs and playback ,1
1238,16507245509.0,IssuesEvent,2021-05-25 20:59:05,AzureAD/microsoft-identity-web,https://api.github.com/repos/AzureAD/microsoft-identity-web,closed,IDW10201: Neither scope or roles claim was found in the bearer token,Answered enhancement question supportability,"**Which version of Microsoft Identity Web are you using?**
Note that to get help, you need to run the latest version.
I added the NuGet package yesterday, so it's the latest:
**Where is the issue?**
* Web app
* [ ] Sign-in users
* [ ] Sign-in users and call web APIs
* Web API
* [X] Protected web APIs (validating tokens)
* [ ] Protected web APIs (validating scopes)
* [ ] Protected web APIs call downstream web APIs
* Token cache serialization
* [ ] In-memory caches
* [ ] Session caches
* [ ] Distributed caches
* Other (please describe)
**Is this a new or an existing app?**
c. This is a new app or an experiment.
I have an app registration like this:
[![enter image description here][1]][1]
In my web API, my appsettings.json has this:
{
""AzureAd"": {
""Instance"": ""https://login.microsoftonline.com/"",
""Domain"": ""xx.com.co"",
""TenantId"": ""xx-c220-48a2-a73f-1177fa2c098e"",
""ClientId"": ""xx-3737-48a5-a6c0-7e3bc4f9a5c9"",
""CallbackPath"": ""/signin-oidc"",
""Scopes"" : ""userimpersonation""
},
""Logging"": {
""LogLevel"": {
""Default"": ""Information"",
""Microsoft"": ""Warning"",
""Microsoft.Hosting.Lifetime"": ""Information""
}
},
""AllowedHosts"": ""*""
}
In my Startup.cs:
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
.AddMicrosoftIdentityWebApi(Configuration.GetSection(""AzureAd""));
And in my controller:
[Authorize]
[RequiredScope(RequiredScopesConfigurationKey = ""AzureAd:Scopes"")]
[ApiController]
[Route(""api/[controller]"")]
public class WeatherForecastController : ControllerBase
{
So, I run the web app, and I get a token via Postman:
curl --location --request POST 'https://login.microsoftonline.com/xx-c220-48a2-a73f-1177fa2c098e/oauth2/v2.0/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--header 'Cookie: wlidperf=FR=L&ST=1526512036088; fpc=AnqPVmkUS_BIgf3y-QfBcFEzTZcDBQAAAKAv0dcOAAAA; stsservicecookie=ests; x-ms-gateway-slice=prod' \
--form 'grant_type=""client_credentials""' \
--form 'client_secret=""xx""' \
--form 'client_id=""xx-3737-48a5-a6c0-7e3bc4f9a5c9
""' \
--form 'scope=""api://xx-3737-48a5-a6c0-7e3bc4f9a5c9/.default
""'
That works fine; however, if I change the scope to: api://xx-3737-48a5-a6c0-7e3bc4f9a5c9/userimpersonation.
Then I get this error:
AADSTS70011: The provided request must include a 'scope' input parameter. The provided value for the input parameter 'scope' is not valid
If I use the token provided with the default scope, when I call my controller, I get the following error:
System.UnauthorizedAccessException: IDW10201: Neither scope or roles claim was found in the bearer token.
at Microsoft.Identity.Web.MicrosoftIdentityWebApiAuthenticationBuilderExtensions.<>c__DisplayClass3_1.<b__1>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Authentication.JwtBearer.JwtBearerHandler.HandleAuthenticateAsync()
at Microsoft.AspNetCore.Authentication.JwtBearer.JwtBearerHandler.HandleAuthenticateAsync()
at Microsoft.AspNetCore.Authentication.AuthenticationHandler`1.AuthenticateAsync()
at Microsoft.AspNetCore.Authentication.AuthenticationService.AuthenticateAsync(HttpContext context, String scheme)
at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
at Swashbuckle.AspNetCore.SwaggerUI.SwaggerUIMiddleware.Invoke(HttpContext httpContext)
at Swashbuckle.AspNetCore.Swagger.SwaggerMiddleware.Invoke(HttpContext httpContext, ISwaggerProvider swaggerProvider)
at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.Invoke(HttpContext context)
[1]: https://i.stack.imgur.com/Uicos.png
",True,"IDW10201: Neither scope or roles claim was found in the bearer token - **Which version of Microsoft Identity Web are you using?**
Note that to get help, you need to run the latest version.
I added the NuGet package yesterday, so it's the latest:
**Where is the issue?**
* Web app
* [ ] Sign-in users
* [ ] Sign-in users and call web APIs
* Web API
* [X] Protected web APIs (validating tokens)
* [ ] Protected web APIs (validating scopes)
* [ ] Protected web APIs call downstream web APIs
* Token cache serialization
* [ ] In-memory caches
* [ ] Session caches
* [ ] Distributed caches
* Other (please describe)
**Is this a new or an existing app?**
c. This is a new app or an experiment.
I have an app registration like this:
[![enter image description here][1]][1]
In my web API, my appsettings.json has this:
{
""AzureAd"": {
""Instance"": ""https://login.microsoftonline.com/"",
""Domain"": ""xx.com.co"",
""TenantId"": ""xx-c220-48a2-a73f-1177fa2c098e"",
""ClientId"": ""xx-3737-48a5-a6c0-7e3bc4f9a5c9"",
""CallbackPath"": ""/signin-oidc"",
""Scopes"" : ""userimpersonation""
},
""Logging"": {
""LogLevel"": {
""Default"": ""Information"",
""Microsoft"": ""Warning"",
""Microsoft.Hosting.Lifetime"": ""Information""
}
},
""AllowedHosts"": ""*""
}
In my Startup.cs:
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
.AddMicrosoftIdentityWebApi(Configuration.GetSection(""AzureAd""));
And in my controller:
[Authorize]
[RequiredScope(RequiredScopesConfigurationKey = ""AzureAd:Scopes"")]
[ApiController]
[Route(""api/[controller]"")]
public class WeatherForecastController : ControllerBase
{
So, I run the web app, and I get a token via Postman:
curl --location --request POST 'https://login.microsoftonline.com/xx-c220-48a2-a73f-1177fa2c098e/oauth2/v2.0/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--header 'Cookie: wlidperf=FR=L&ST=1526512036088; fpc=AnqPVmkUS_BIgf3y-QfBcFEzTZcDBQAAAKAv0dcOAAAA; stsservicecookie=ests; x-ms-gateway-slice=prod' \
--form 'grant_type=""client_credentials""' \
--form 'client_secret=""xx""' \
--form 'client_id=""xx-3737-48a5-a6c0-7e3bc4f9a5c9
""' \
--form 'scope=""api://xx-3737-48a5-a6c0-7e3bc4f9a5c9/.default
""'
That works fine; however, if I change the scope to: api://xx-3737-48a5-a6c0-7e3bc4f9a5c9/userimpersonation.
Then I get this error:
AADSTS70011: The provided request must include a 'scope' input parameter. The provided value for the input parameter 'scope' is not valid
If I use the token provided with the default scope, when I call my controller, I get the following error:
System.UnauthorizedAccessException: IDW10201: Neither scope or roles claim was found in the bearer token.
at Microsoft.Identity.Web.MicrosoftIdentityWebApiAuthenticationBuilderExtensions.<>c__DisplayClass3_1.<b__1>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Authentication.JwtBearer.JwtBearerHandler.HandleAuthenticateAsync()
at Microsoft.AspNetCore.Authentication.JwtBearer.JwtBearerHandler.HandleAuthenticateAsync()
at Microsoft.AspNetCore.Authentication.AuthenticationHandler`1.AuthenticateAsync()
at Microsoft.AspNetCore.Authentication.AuthenticationService.AuthenticateAsync(HttpContext context, String scheme)
at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
at Swashbuckle.AspNetCore.SwaggerUI.SwaggerUIMiddleware.Invoke(HttpContext httpContext)
at Swashbuckle.AspNetCore.Swagger.SwaggerMiddleware.Invoke(HttpContext httpContext, ISwaggerProvider swaggerProvider)
at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.Invoke(HttpContext context)
[1]: https://i.stack.imgur.com/Uicos.png
",1, neither scope or roles claim was found in the bearer token which version of microsoft identity web are you using note that to get help you need to run the latest version i added the nuget package yesterday so its the latest where is the issue web app sign in users sign in users and call web apis web api protected web apis validating tokens protected web apis validating scopes protected web apis call downstream web apis token cache serialization in memory caches session caches distributed caches other please describe is this a new or an existing app c this is a new app or an experiment i have an app registration like this in my webappi my app settings json i have this azuread instance domain xx com co tenantid xx clientid xx callbackpath signin oidc scopes userimpersonation logging loglevel default information microsoft warning microsoft hosting lifetime information allowedhosts in my startup cs services addauthentication jwtbearerdefaults authenticationscheme addmicrosoftidentitywebapi configuration getsection azuread and in my controller public class weatherforecastcontroller controllerbase so i run thhe web app and i get a token via postman curl location request post header content type application x www form urlencoded header cookie wlidperf fr l st fpc anqpvmkus stsservicecookie ests x ms gateway slice prod form grant type client credentials form client secret xx form client id xx form scope api xx default that works fine however if i channge the scope to api xx userimpersonation then i get this error the provided request must include a scope input parameter the provided value for the input parameter scope is not valid if i use the token provided with the default scope when i call my controller i get the following error system unauthorizedaccessexception neither scope or roles claim was found in the bearer token at microsoft identity web microsoftidentitywebapiauthenticationbuilderextensions c b d movenext end of stack trace from previous location where exception was thrown at microsoft aspnetcore authentication jwtbearer jwtbearerhandler handleauthenticateasync at microsoft aspnetcore authentication jwtbearer jwtbearerhandler handleauthenticateasync at microsoft aspnetcore authentication authenticationhandler authenticateasync at microsoft aspnetcore authentication authenticationservice authenticateasync httpcontext context string scheme at microsoft aspnetcore authentication authenticationmiddleware invoke httpcontext context at swashbuckle aspnetcore swaggerui swaggeruimiddleware invoke httpcontext httpcontext at swashbuckle aspnetcore swagger swaggermiddleware invoke httpcontext httpcontext iswaggerprovider swaggerprovider at microsoft aspnetcore diagnostics developerexceptionpagemiddleware invoke httpcontext context ,1
438847,30665172948.0,IssuesEvent,2023-07-25 17:42:33,basilisque-framework/CodeAnalysis,https://api.github.com/repos/basilisque-framework/CodeAnalysis,closed,Add readme and license,documentation,"- add a readme file with a basic description of the project
- add a license file",1.0,"Add readme and license - - add a readme file with a basic description of the project
- add a license file",0,add readme and license add a readme file with a basic description of the project add a license file,0
70992,18364993467.0,IssuesEvent,2021-10-09 22:34:27,google/mediapipe,https://api.github.com/repos/google/mediapipe,opened,MediaPipe Build Error - Building C++ command-line example - MediaPipe Hands,type:build/install,"Error building MediaPipe Hands example C++
[Command]
bazel build -c opt --define MEDIAPIPE_DISABLE_GPU=1 mediapipe/examples/desktop/hand_tracking:hand_tracking_cpu
[Error]
[1,007 / 3,675] 128 actions, 1 running
ERROR: /home/ubuntu/mediapipe/mediapipe/calculators/tensor/BUILD:657:11: C++ compilation of rule '//mediapipe/calculators/tensor:image_to_tensor_converter_o
pencv' failed (Exit 1): gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonh
eap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 57 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wun
used-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 57 argumen
t(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
mediapipe/calculators/tensor/image_to_tensor_converter_opencv.cc: In member function 'virtual absl::lts_20210324::StatusOr mediapipe::{an
onymous}::OpenCvProcessor::Convert(const mediapipe::Image&, const mediapipe::RotatedRect&, const mediapipe::Size&, float, float)':
mediapipe/calculators/tensor/image_to_tensor_converter_opencv.cc:106:12: error: could not convert 'tensor' from 'mediapipe::Tensor' to 'absl::lts_20210324::
StatusOr'
return tensor;
^~~~~~
Target //mediapipe/examples/desktop/hand_tracking:hand_tracking_cpu failed to build
[Environment]
Machine: Ubuntu 18.04 (running on Amazon Lightsail), 4GB ram 2 vCPUs, 80 GB SSD
gcc Version: 7.5.0
Bazel Version: 3.7.2
OpenCV Version: 3.2.0
python Version: 2.7.17
python3 Version: 3.6.9
Note: Hello World builds and runs correctly.
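For context on the compiler message above (general C++ background, not a diagnosis of this particular MediaPipe/abseil combination): a statement like `return tensor;` in a function declared to return `absl::StatusOr<Tensor>` only compiles if the return type has a usable converting constructor for the returned expression. A minimal self-contained sketch with a hypothetical wrapper type standing in for `StatusOr`:
```cpp
#include <utility>

// Hypothetical wrapper standing in for absl::StatusOr<T>; the real absl API
// differs, this only illustrates the conversion rule behind the error text.
template <typename T>
class ResultOr {
 public:
  ResultOr(T value) : value_(std::move(value)) {}  // converting constructor
 private:
  T value_;
};

struct Tensor {
  Tensor() = default;
  Tensor(Tensor &&) = default;      // movable
  Tensor(const Tensor &) = delete;  // non-copyable
};

ResultOr<Tensor> Convert() {
  Tensor tensor;
  // If the converting constructor above were missing or not viable for a
  // non-copyable type, returning the Tensor would fail with a conversion
  // error; spelling the move out explicitly is the usual workaround.
  return ResultOr<Tensor>(std::move(tensor));
}

int main() {
  Convert();
  return 0;
}
```
Whether that applies here likely depends on which gcc/abseil versions the pinned MediaPipe dependencies pull in, so treat it as a pointer for investigation rather than a fix.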
",1.0,"MediaPipe Build Error - Building C++ command-line example - MediaPipe Hands - Error building MediaPipe Hands example C++
[Command]
bazel build -c opt --define MEDIAPIPE_DISABLE_GPU=1 mediapipe/examples/desktop/hand_tracking:hand_tracking_cpu
[Error]
[1,007 / 3,675] 128 actions, 1 running
ERROR: /home/ubuntu/mediapipe/mediapipe/calculators/tensor/BUILD:657:11: C++ compilation of rule '//mediapipe/calculators/tensor:image_to_tensor_converter_o
pencv' failed (Exit 1): gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonh
eap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 57 argument(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wun
used-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 57 argumen
t(s) skipped)
Use --sandbox_debug to see verbose messages from the sandbox
mediapipe/calculators/tensor/image_to_tensor_converter_opencv.cc: In member function 'virtual absl::lts_20210324::StatusOr mediapipe::{an
onymous}::OpenCvProcessor::Convert(const mediapipe::Image&, const mediapipe::RotatedRect&, const mediapipe::Size&, float, float)':
mediapipe/calculators/tensor/image_to_tensor_converter_opencv.cc:106:12: error: could not convert 'tensor' from 'mediapipe::Tensor' to 'absl::lts_20210324::
StatusOr'
return tensor;
^~~~~~
Target //mediapipe/examples/desktop/hand_tracking:hand_tracking_cpu failed to build
[Environment]
Machine: Ubuntu 18.04 (running on Amazon Lightsail), 4GB ram 2 vCPUs, 80 GB SSD
gcc Version: 7.5.0
Bazel Version: 3.7.2
OpenCV Version: 3.2.0
python Version: 2.7.17
python3 Version: 3.6.9
Note: Hello World builds and runs correctly.
",0,mediapipe build error building c command line example mediapipe hands error building mediapipe hands example c bazel build c opt define mediapipe disable gpu mediapipe examples desktop hand tracking hand tracking cpu actions running error home ubuntu mediapipe mediapipe calculators tensor build c compilation of rule mediapipe calculators tensor image to tensor converter o pencv failed exit gcc failed error executing command usr bin gcc u fortify source fstack protector wall wunused but set parameter wno free nonh eap object fno omit frame pointer d fortify source dndebug ffunction sections remaining argument s skipped use sandbox debug to see verbose messages from the sandbox gcc failed error executing command usr bin gcc u fortify source fstack protector wall wun used but set parameter wno free nonheap object fno omit frame pointer d fortify source dndebug ffunction sections remaining argumen t s skipped use sandbox debug to see verbose messages from the sandbox mediapipe calculators tensor image to tensor converter opencv cc in member function virtual absl lts statusor mediapipe an onymous opencvprocessor convert const mediapipe image const mediapipe rotatedrect const mediapipe size float float mediapipe calculators tensor image to tensor converter opencv cc error could not convert tensor from mediapipe tensor to absl lts statusor return tensor target mediapipe examples desktop hand tracking hand tracking cpu failed to build machine ubuntu running on amazon lightsail ram vcpus gb ssd gcc version bazel version opencv version python version version note hello world builds and runs correctly ,0
270999,23576654294.0,IssuesEvent,2022-08-23 01:58:16,lowRISC/opentitan,https://api.github.com/repos/lowRISC/opentitan,opened,[rom-e2e] rom_e2e_bootstrap_enabled_not_requested,Priority:P2 Type:Task SW:ROM Milestone:V2 Component:RomE2eTest,"### Test point name
[rom_e2e_bootstrap_enabled_not_requested](https://cs.opensource.google/opentitan/opentitan/+/master:sw/device/silicon_creator/rom/data/rom_testplan.hjson?q=rom_e2e_bootstrap_enabled_not_requested)
### Host side component
Unknown
### OpenTitanTool infrastructure implemented
Unknown
### Contact person
@alphan
### Checklist
Please fill out this checklist as items are completed. Link to PRs and issues as appropriate.
- [ ] Check if existing test covers most or all of this testpoint (if so, either extend said test to cover all points, or skip the next 3 checkboxes)
- [ ] Device-side (C) component developed
- [ ] Bazel build rules developed
- [ ] Host-side component developed
- [ ] HJSON test plan updated with test name (so it shows up in the dashboard)
- [ ] Test added to dvsim nightly regression (and passing at time of checking)
### Verify that ROM does not enter bootstrap when enabled in OTP but not requested.
`OWNER_SW_CFG_ROM_BOOTSTRAP_EN` OTP item must be `kHardenedBoolTrue` (`0x739`).
- Do not apply bootstrap pin strapping.
- Reset the chip.
- Verify that the chip outputs the expected `BFV`: `0142500d` over UART.
- ROM will continuously reset the chip and output the same `BFV`.
- Verify that the chip does not respond to `READ_STATUS` (`0x05`).
- The data on the CIPO line must be `0xff`.
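For illustration only, a small self-contained C++ sketch of the two checks described above; the real test would go through OpenTitanTool and the OpenTitan test harness, and the helpers below are hypothetical.
```cpp
#include <array>
#include <cstdint>
#include <iostream>

// Hypothetical host-side helpers; not the actual OpenTitan test harness code.
constexpr uint32_t kExpectedBfv = 0x0142500d;  // boot fault value from the testpoint

// The ROM is expected to keep reporting the same BFV over UART after each reset.
bool BfvMatches(uint32_t reported_bfv) {
  return reported_bfv == kExpectedBfv;
}

// With bootstrap enabled in OTP but not requested via pin strapping, READ_STATUS
// (0x05) should get no response: every byte observed on CIPO reads back as 0xff.
bool ReadStatusIgnored(const std::array<uint8_t, 4> &cipo_bytes) {
  for (uint8_t b : cipo_bytes) {
    if (b != 0xff) {
      return false;
    }
  }
  return true;
}

int main() {
  std::cout << BfvMatches(0x0142500d) << ' '
            << ReadStatusIgnored({0xff, 0xff, 0xff, 0xff}) << '\n';
  return 0;
}
```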
",1.0,"[rom-e2e] rom_e2e_bootstrap_enabled_not_requested - ### Test point name
[rom_e2e_bootstrap_enabled_not_requested](https://cs.opensource.google/opentitan/opentitan/+/master:sw/device/silicon_creator/rom/data/rom_testplan.hjson?q=rom_e2e_bootstrap_enabled_not_requested)
### Host side component
Unknown
### OpenTitanTool infrastructure implemented
Unknown
### Contact person
@alphan
### Checklist
Please fill out this checklist as items are completed. Link to PRs and issues as appropriate.
- [ ] Check if existing test covers most or all of this testpoint (if so, either extend said test to cover all points, or skip the next 3 checkboxes)
- [ ] Device-side (C) component developed
- [ ] Bazel build rules developed
- [ ] Host-side component developed
- [ ] HJSON test plan updated with test name (so it shows up in the dashboard)
- [ ] Test added to dvsim nightly regression (and passing at time of checking)
### Verify that ROM does not enter bootstrap when enabled in OTP but not requested.
`OWNER_SW_CFG_ROM_BOOTSTRAP_EN` OTP item must be `kHardenedBoolTrue` (`0x739`).
- Do not apply bootstrap pin strapping.
- Reset the chip.
- Verify that the chip outputs the expected `BFV`: `0142500d` over UART.
- ROM will continuously reset the chip and output the same `BFV`.
- Verify that the chip does not respond to `READ_STATUS` (`0x05`).
- The data on the CIPO line must be `0xff`.
",0, rom bootstrap enabled not requested test point name host side component unknown opentitantool infrastructure implemented unknown contact person alphan checklist please fill out this checklist as items are completed link to prs and issues as appropriate check if existing test covers most or all of this testpoint if so either extend said test to cover all points or skip the next checkboxes device side c component developed bazel build rules developed host side component developed hjson test plan updated with test name so it shows up in the dashboard test added to dvsim nightly regression and passing at time of checking verify that rom does not enter bootstrap when enabled in otp but not requested owner sw cfg rom bootstrap en otp item must be khardenedbooltrue do not apply bootstrap pin strapping reset the chip verify that the chip outputs the expected bfv over uart rom will continously reset the chip and output the same bfv verify that the chip does not respond to read status the data on the cipo line must be ,0
204778,15531592090.0,IssuesEvent,2021-03-14 00:30:02,backend-br/vagas,https://api.github.com/repos/backend-br/vagas,closed,[Remoto/Campinas] Node.js developer @HDN.Digital,Alocado CI Express MongoDB NodeJS Pleno Remoto Stale Testes Unitários,"
## Our company
We are a digital transformation consultancy that has been in the market for 12 years, and we are expanding rapidly.
We have a team specialized in delivering technology solutions that add value, save time, and bring innovation to the way you work. Our mission is to collaborate and to spread knowledge and company culture through new technologies and agile processes.
We have more than 10 years of experience in software development and digital solutions.
It is part of HDN's DNA to understand our clients' pain points, put ourselves in their place, and offer the solutions that best fit their specific needs.
Every change and digital transformation requires a first step, and we are prepared to walk side by side with you.
We value the development of our people at both the technical and the personal level. We have a light and fun environment; we want you to have a voice and be heard, take part in the company's processes, and find a friendly environment in which to grow professionally and build your career with us!
## Job description
You will work alongside our team of back-end engineers to architect and implement solutions for a range of clients (national and international) on agile, disruptive projects with direct business impact. We want a communicative professional who loves to code and takes ownership of their deliverables.
## Location
Remote or office, Campinas - Nova Campinas
## Requirements
**Required:**
- Experience with Node and Express
**Desirable:**
- Knowledge of MongoDB and Postgres,
- Experience with unit testing with Jest
- Experience with integration tests
- Knowledge of CI/CD
- Knowledge of React and/or Vue
- Experience with microservices
**Bonus:**
- Advanced English
## Benefits
- 30 days of paid vacation
**Bonus:**
- Game room
- Access to the internal courses offered by the company
## Hiring
Cooperative, terms to be agreed
## How to apply
Please send an email to marcelo.pinheiro@hdnit.com.br with your CV attached - use the subject line: Vaga NodeJS
## Average feedback time
We usually send feedback within 2 days after each stage of the process.
Contact e-mail in case you do not get a reply: contato@hdnit.com.br
## Labels
#### Allocation
- Alocado
- Remoto
#### Contract type
- Cooperativa
#### Level
- Júnior
- Pleno
- Sênior
",1.0,"[Remoto/Campinas] Node.js developer @HDN.Digital -
## Our company
We are a digital transformation consultancy that has been in the market for 12 years, and we are expanding rapidly.
We have a team specialized in delivering technology solutions that add value, save time, and bring innovation to the way you work. Our mission is to collaborate and to spread knowledge and company culture through new technologies and agile processes.
We have more than 10 years of experience in software development and digital solutions.
It is part of HDN's DNA to understand our clients' pain points, put ourselves in their place, and offer the solutions that best fit their specific needs.
Every change and digital transformation requires a first step, and we are prepared to walk side by side with you.
We value the development of our people at both the technical and the personal level. We have a light and fun environment; we want you to have a voice and be heard, take part in the company's processes, and find a friendly environment in which to grow professionally and build your career with us!
## Job description
You will work alongside our team of back-end engineers to architect and implement solutions for a range of clients (national and international) on agile, disruptive projects with direct business impact. We want a communicative professional who loves to code and takes ownership of their deliverables.
## Location
Remote or office, Campinas - Nova Campinas
## Requirements
**Required:**
- Experience with Node and Express
**Desirable:**
- Knowledge of MongoDB and Postgres,
- Experience with unit testing with Jest
- Experience with integration tests
- Knowledge of CI/CD
- Knowledge of React and/or Vue
- Experience with microservices
**Bonus:**
- Advanced English
## Benefits
- 30 days of paid vacation
**Bonus:**
- Game room
- Access to the internal courses offered by the company
## Hiring
Cooperative, terms to be agreed
## How to apply
Please send an email to marcelo.pinheiro@hdnit.com.br with your CV attached - use the subject line: Vaga NodeJS
## Average feedback time
We usually send feedback within 2 days after each stage of the process.
Contact e-mail in case you do not get a reply: contato@hdnit.com.br
## Labels
#### Allocation
- Alocado
- Remoto
#### Contract type
- Cooperativa
#### Level
- Júnior
- Pleno
- Sênior
",0, node js developer hdn digital nossa empresa somos uma consultoria de transformação digital que está no mercado há anos e estamos em franca expansão temos um time especializado em entregar soluções de tecnologia que agregam valor otimizam o tempo e trazem inovação ao seu modo de trabalho nossa missão é colaborar disseminar o conhecimento e a cultura de empresas através de novas tecnologias e processos ágeis há mais de anos temos a experiência em desenvolvimento de softwares e soluções digitais faz parte do dna da hdn entender a dor dos nossos clientes se colocar no lugar dele e oferecer as soluções que melhor se adequam às específicas necessidades toda mudança e transformação digital requer um primeiro passo e estamos preparados para caminhar lado a lado prezamos pelo desenvolvimento do colaborador tanto a nível técnico quanto a nível pessoal temos um ambiente leve e divertido queremos que você tenha voz e seja ouvido tenha participação nos processos da empresa e encontre um ambiente amigável para crescer profissionalmente e construir sua carreira conosco descrição da vaga você vai atuar junto ao nosso time de engenheiros back end para arquitetar e implementar soluções para diversos clientes nacionais e internacionais em projetos ágeis e desruptivos com impacto direto no negócio queremos um profissional comunicativo que ame codar e tenha responsabilidade sobre suas entregas local remoto ou escritório campinas nova campinas requisitos obrigatórios experiência com node e express desejáveis conhecimentos com mongodb e postgres experiência com testes unitários com jest experiência com testes de integração conhecimentos em ci cd conhecimentos em react e ou vue experiência com microsserviços diferenciais inglês avançado benefícios dias de férias remuneradas diferenciais sala de jogos acesso aos cursos internos disponibilizados pela empresa contratação cooperativa a combinar como se candidatar por favor envie um email para marcelo pinheiro hdnit com br com seu cv anexado enviar no assunto vaga nodejs tempo médio de feedbacks costumamos enviar feedbacks em até dias após cada processo e mail para contato em caso de não haver resposta contato hdnit com br labels alocação alocado remoto regime cooperativa nível júnior pleno sênior ,0
1386,19985127338.0,IssuesEvent,2022-01-30 14:45:53,lkrg-org/lkrg,https://api.github.com/repos/lkrg-org/lkrg,closed,Build fails on OpenSUSE leap's Linux 5.3.18-59.37,portability,"```
/__w/lkrg/lkrg/src/modules/database/JUMP_LABEL/p_arch_jump_label_transform_apply/p_arch_jump_label_transform_apply.c: In function 'p_arch_jump_label_transform_apply_entry':
/__w/lkrg/lkrg/src/modules/database/JUMP_LABEL/p_arch_jump_label_transform_apply/p_arch_jump_label_transform_apply.c:94:19: error: 'p_text_poke_loc {aka struct text_poke_loc}' has no member named 'detour'
&& p_tmp->detour) {
^~
make[3]: *** [/usr/src/linux-5.3.18-59.37/scripts/Makefile.build:288: /__w/lkrg/lkrg/src/modules/database/JUMP_LABEL
```",True,"Build fails on OpenSUSE leap's Linux 5.3.18-59.37 - ```
/__w/lkrg/lkrg/src/modules/database/JUMP_LABEL/p_arch_jump_label_transform_apply/p_arch_jump_label_transform_apply.c: In function 'p_arch_jump_label_transform_apply_entry':
/__w/lkrg/lkrg/src/modules/database/JUMP_LABEL/p_arch_jump_label_transform_apply/p_arch_jump_label_transform_apply.c:94:19: error: 'p_text_poke_loc {aka struct text_poke_loc}' has no member named 'detour'
&& p_tmp->detour) {
^~
make[3]: *** [/usr/src/linux-5.3.18-59.37/scripts/Makefile.build:288: /__w/lkrg/lkrg/src/modules/database/JUMP_LABEL
```",1,build fails on opensuse leap s linux w lkrg lkrg src modules database jump label p arch jump label transform apply p arch jump label transform apply c in function p arch jump label transform apply entry w lkrg lkrg src modules database jump label p arch jump label transform apply p arch jump label transform apply c error p text poke loc aka struct text poke loc has no member named detour p tmp detour make usr src linux scripts makefile build w lkrg lkrg src modules database jump label ,1
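Regarding the OpenSUSE Leap build failure above: distribution kernels often backport upstream changes, so a struct member can disappear (or appear) independently of the nominal kernel version. The following is a generic sketch of the compile-time guard pattern such portability fixes usually take; the names are illustrative and are not the actual LKRG or kernel identifiers.
```cpp
// Illustrative only: shows the guard pattern, not the real LKRG fix.
// In a kernel module this flag would come from a version or feature test
// (e.g. a configure-time probe of the running kernel's headers).
#define P_HAS_DETOUR_MEMBER 0

struct poke_loc_without_detour { void *addr; };
struct poke_loc_with_detour    { void *addr; bool detour; };

#if P_HAS_DETOUR_MEMBER
using poke_loc = poke_loc_with_detour;
#else
using poke_loc = poke_loc_without_detour;
#endif

static bool uses_detour(const poke_loc &p) {
#if P_HAS_DETOUR_MEMBER
  return p.detour;  // only compiled on kernel branches whose struct has the member
#else
  (void)p;          // member absent on this branch: treat as false
  return false;
#endif
}

int main() {
  poke_loc p{};
  return uses_detour(p) ? 1 : 0;
}
```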
1434,21655175962.0,IssuesEvent,2022-05-06 13:28:25,damccorm/test-migration-target,https://api.github.com/repos/damccorm/test-migration-target,opened,Use portable WindowIntoPayload in DataflowRunner,P3 runner-dataflow portability task,"The Java-specific blobs transmitted to Dataflow need more context, in the form of portability framework protos.
Imported from Jira [BEAM-3514](https://issues.apache.org/jira/browse/BEAM-3514). Original Jira may contain additional context.
Reported by: kenn.
This issue has child subcomponents which were not migrated over. See the original Jira for more information.",True,"Use portable WindowIntoPayload in DataflowRunner - The Java-specific blobs transmitted to Dataflow need more context, in the form of portability framework protos.
Imported from Jira [BEAM-3514](https://issues.apache.org/jira/browse/BEAM-3514). Original Jira may contain additional context.
Reported by: kenn.
This issue has child subcomponents which were not migrated over. See the original Jira for more information.",1,use portable windowintopayload in dataflowrunner the java specific blobs transmitted to dataflow need more context in the form of portability framework protos imported from jira original jira may contain additional context reported by kenn this issue has child subcomponents which were not migrated over see the original jira for more information ,1
13,2577767565.0,IssuesEvent,2015-02-12 19:00:53,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,Can't build on Well,portability,"Some change to formats.c between b647486..5d1a14b has introduced this:
```
gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
gcc -DAC_BUILT -march=native -mavx -c -g -O2 -I/usr/local/include -I/opt/AMDAPP/include -I/usr/local/cuda/include -DARCH_LITTLE_ENDIAN=1 -Wall -Wdeclaration-after-statement -fomit-frame-pointer -Wno-deprecated-declarations -Wno-format-extra-args -D_GNU_SOURCE -DHAVE_CUDA -fopenmp -pthread -DHAVE_OPENCL -pthread -funroll-loops formats.c -o formats.o
/tmp/ccXscU4h.s: Assembler messages:
/tmp/ccXscU4h.s:407: Error: no such instruction: `vfmadd312sd .LC5(%rip),%xmm0,%xmm2'
make[1]: *** [formats.o] Error 1
make[1]: Leaving directory `/space/home/magnum/src/john/src'
make: *** [default] Error 2
```",True,"Can't build on Well - Some change to formats.c between b647486..5d1a14b has introduced this:
```
gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
gcc -DAC_BUILT -march=native -mavx -c -g -O2 -I/usr/local/include -I/opt/AMDAPP/include -I/usr/local/cuda/include -DARCH_LITTLE_ENDIAN=1 -Wall -Wdeclaration-after-statement -fomit-frame-pointer -Wno-deprecated-declarations -Wno-format-extra-args -D_GNU_SOURCE -DHAVE_CUDA -fopenmp -pthread -DHAVE_OPENCL -pthread -funroll-loops formats.c -o formats.o
/tmp/ccXscU4h.s: Assembler messages:
/tmp/ccXscU4h.s:407: Error: no such instruction: `vfmadd312sd .LC5(%rip),%xmm0,%xmm2'
make[1]: *** [formats.o] Error 1
make[1]: Leaving directory `/space/home/magnum/src/john/src'
make: *** [default] Error 2
```",1,can t build on well some change to formats c between has introduced this gcc version ubuntu linaro gcc dac built march native mavx c g i usr local include i opt amdapp include i usr local cuda include darch little endian wall wdeclaration after statement fomit frame pointer wno deprecated declarations wno format extra args d gnu source dhave cuda fopenmp pthread dhave opencl pthread funroll loops formats c o formats o tmp s assembler messages tmp s error no such instruction rip make error make leaving directory space home magnum src john src make error ,1
886,11771579257.0,IssuesEvent,2020-03-16 00:37:30,microsoft/vscode,https://api.github.com/repos/microsoft/vscode,closed,Portable Mode: Support auto update on Windows,feature-request portable-mode,"Issue Type: Bug
The portable/zip version of vscode doesn't auto-update; I have to update it manually by copying the zip.
I'd prefer if auto update worked because I'd imagine you could intelligently remove old files and persist my data - right now I don't know if I should delete everything and paste the new stuff (and lose my settings) or if it's OK to layer.
VS Code version: Code 1.40.0 (86405ea23e3937316009fc27c9361deee66ffbf5, 2019-11-06T17:02:13.381Z)
OS version: Windows_NT x64 10.0.18362
",True,"Portable Mode: Support auto update on Windows - Issue Type: Bug
The portable/zip version of vscode doesn't auto-update; I have to update it manually by copying the zip.
I'd prefer if auto update worked because I'd imagine you could intelligently remove old files and persist my data - right now I don't know if I should delete everything and paste the new stuff (and lose my settings) or if it's OK to layer.
VS Code version: Code 1.40.0 (86405ea23e3937316009fc27c9361deee66ffbf5, 2019-11-06T17:02:13.381Z)
OS version: Windows_NT x64 10.0.18362
",1,portable mode support auto update on windows issue type bug having the portable zip version of vscode doesn t autoupdate i have to manually update it by copying the zip i d prefer if auto update worked because i d imagine you could intelligently remove old files and persist my data right now i don t know if i should delete everything and paste the new stuff and lose my settings or if it s ok to layer vs code version code os version windows nt ,1
207895,16096927998.0,IssuesEvent,2021-04-27 02:09:10,padma-g/info478-project,https://api.github.com/repos/padma-g/info478-project,closed,Identify necessary technical skills and major challenges,documentation,"What new technical skills will we need to learn in order to complete the project?
What major challenges do we anticipate?",1.0,"Identify necessary technical skills and major challenges - What new technical skills will we need to learn in order to complete the project?
What major challenges do we anticipate?",0,identify necessary technical skills and major challenges what new technical skills will we need to learn in order to complete the project what major challenges do we anticipate ,0
1767,26029031687.0,IssuesEvent,2022-12-21 19:08:14,AzureAD/microsoft-authentication-library-for-dotnet,https://api.github.com/repos/AzureAD/microsoft-authentication-library-for-dotnet,closed,[Feature Request] Add support for LogCallback in ApplicationOptions ,Supportability,"
```csharp
var options = new ConfidentialClientApplicationOptions
{
ClientId = TestConstants.ClientId,
LogLevel = LogLevel.Verbose,
EnablePiiLogging = true,
IsDefaultPlatformLoggingEnabled = true,
LogCallback = (level, msg, pii) => { log(msg); } // Not Available!
};
var cca = ConfidentialClientApplicationBuilder.CreateWithApplicationOptions(options)
.Build();
```",True,"[Feature Request] Add support for LogCallback in ApplicationOptions -
```csharp
var options = new ConfidentialClientApplicationOptions
{
ClientId = TestConstants.ClientId,
LogLevel = LogLevel.Verbose,
EnablePiiLogging = true,
IsDefaultPlatformLoggingEnabled = true,
LogCallback = (level, msg, pii) => { log(msg); } // Not Available!
};
var cca = ConfidentialClientApplicationBuilder.CreateWithApplicationOptions(options)
.Build();
```",1, add support for logcallback in applicationoptions csharp var options new confidentialclientapplicationoptions clientid testconstants clientid loglevel loglevel verbose enablepiilogging true isdefaultplatformloggingenabled true logcallback level msg pii log msg not available var cca confidentialclientapplicationbuilder createwithapplicationoptions options build ,1
181973,21664471323.0,IssuesEvent,2022-05-07 01:27:40,scottstientjes/snipe-it,https://api.github.com/repos/scottstientjes/snipe-it,closed,WS-2018-0236 (Medium) detected in mem-1.1.0.tgz - autoclosed,security vulnerability,"## WS-2018-0236 - Medium Severity Vulnerability
Vulnerable Library - mem-1.1.0.tgz
Memoize functions - An optimization used to speed up consecutive function calls by caching the result of calls with identical input
Library home page: https://registry.npmjs.org/mem/-/mem-1.1.0.tgz
Path to dependency file: /tmp/ws-scm/snipe-it/package.json
Path to vulnerable library: /tmp/ws-scm/snipe-it/node_modules/mem/package.json
Dependency Hierarchy:
- laravel-mix-2.1.11.tgz (Root Library)
- yargs-8.0.2.tgz
- os-locale-2.1.0.tgz
- :x: **mem-1.1.0.tgz** (Vulnerable Library)
Found in HEAD commit: 35f2b36393de933b01f7dd715958a7a89a2d783b
Vulnerability Details
In nodejs-mem before version 4.0.0 there is a memory leak due to old results not being removed from the cache despite reaching maxAge. Exploitation of this can lead to exhaustion of memory and subsequent denial of service.
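To make the leak class concrete (a conceptual C++ sketch, not a port of the JavaScript `mem` package): a memoizer that honours `maxAge` on reads but never erases stale entries keeps every result alive forever; dropping expired entries on lookup, as below, is one way to bound that growth.
```cpp
#include <chrono>
#include <functional>
#include <iostream>
#include <unordered_map>

// Illustrative TTL-aware memoizer; names and behaviour are hypothetical.
template <typename Key, typename Value>
class TtlMemo {
 public:
  using Clock = std::chrono::steady_clock;

  TtlMemo(std::function<Value(const Key &)> fn, std::chrono::milliseconds max_age)
      : fn_(std::move(fn)), max_age_(max_age) {}

  Value operator()(const Key &key) {
    const auto now = Clock::now();
    auto it = cache_.find(key);
    if (it != cache_.end()) {
      if (now - it->second.stored_at < max_age_) {
        return it->second.value;  // fresh entry: cache hit
      }
      cache_.erase(it);           // stale entry: erase instead of leaking it
    }
    Value value = fn_(key);
    cache_.emplace(key, Entry{value, now});
    return value;
  }

 private:
  struct Entry {
    Value value;
    Clock::time_point stored_at;
  };
  std::function<Value(const Key &)> fn_;
  std::chrono::milliseconds max_age_;
  std::unordered_map<Key, Entry> cache_;
};

int main() {
  TtlMemo<int, int> square([](const int &x) { return x * x; },
                           std::chrono::milliseconds(100));
  std::cout << square(7) << ' ' << square(7) << '\n';  // second call hits the cache
  return 0;
}
```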
Publish Date: 2018-08-27
URL: WS-2018-0236
CVSS 2 Score Details (5.5 )
Base Score Metrics not available
Suggested Fix
Type: Upgrade version
Origin: https://bugzilla.redhat.com/show_bug.cgi?id=1623744
Release Date: 2019-05-30
Fix Resolution: 4.0.0
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"WS-2018-0236 (Medium) detected in mem-1.1.0.tgz - autoclosed - ## WS-2018-0236 - Medium Severity Vulnerability
Vulnerable Library - mem-1.1.0.tgz
Memoize functions - An optimization used to speed up consecutive function calls by caching the result of calls with identical input
Library home page: https://registry.npmjs.org/mem/-/mem-1.1.0.tgz
Path to dependency file: /tmp/ws-scm/snipe-it/package.json
Path to vulnerable library: /tmp/ws-scm/snipe-it/node_modules/mem/package.json
Dependency Hierarchy:
- laravel-mix-2.1.11.tgz (Root Library)
- yargs-8.0.2.tgz
- os-locale-2.1.0.tgz
- :x: **mem-1.1.0.tgz** (Vulnerable Library)
Found in HEAD commit: 35f2b36393de933b01f7dd715958a7a89a2d783b
Vulnerability Details
In nodejs-mem before version 4.0.0 there is a memory leak due to old results not being removed from the cache despite reaching maxAge. Exploitation of this can lead to exhaustion of memory and subsequent denial of service.
Publish Date: 2018-08-27
URL: WS-2018-0236
CVSS 2 Score Details (5.5 )
Base Score Metrics not available
Suggested Fix
Type: Upgrade version
Origin: https://bugzilla.redhat.com/show_bug.cgi?id=1623744
Release Date: 2019-05-30
Fix Resolution: 4.0.0
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,ws medium detected in mem tgz autoclosed ws medium severity vulnerability vulnerable library mem tgz memoize functions an optimization used to speed up consecutive function calls by caching the result of calls with identical input library home page a href path to dependency file tmp ws scm snipe it package json path to vulnerable library tmp ws scm snipe it node modules mem package json dependency hierarchy laravel mix tgz root library yargs tgz os locale tgz x mem tgz vulnerable library found in head commit a href vulnerability details in nodejs mem before version there is a memory leak due to old results not being removed from the cache despite reaching maxage exploitation of this can lead to exhaustion of memory and subsequent denial of service publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
761393,26678903179.0,IssuesEvent,2023-01-26 16:11:09,brave/brave-browser,https://api.github.com/repos/brave/brave-browser,closed,Add a loading state when switching between Filecoin Mainnet to Testnet,bug priority/P2 QA/Yes release-notes/include feature/web3/wallet OS/Desktop front-end-change feature/web3/wallet/filecoin,"
## Description
Add a loading state when switching from Filecoin Mainnet to Testnet
## Steps to Reproduce
1. Import Filecoin from Ledger
2. Switch to Testnet
3. Shows Mainnet accounts for a few seconds before loading Testnet accounts
## Actual result:
https://user-images.githubusercontent.com/17010094/171572162-db074d4a-eeba-4deb-bfac-fcffed8472e3.mov
## Expected result:
Add a loading state so user doesn't pick mainnet accounts for testnet
## Reproduces how often:
Easy
## Brave version (brave://version info)
Brave | 1.40.80 Chromium: 102.0.5005.78 (Official Build) beta (64-bit)
-- | --
Revision | `df6dbb5a9fd82af3f567198af2eb5fb4876ef99c-refs/branch-heads/5005_59@{#3}`
OS | All
## Version/Channel Information:
- Can you reproduce this issue with the current release? NA
- Can you reproduce this issue with the beta channel? Yes
- Can you reproduce this issue with the nightly channel? Yes
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields? NA
- Does the issue resolve itself when disabling Brave Rewards? NA
- Is the issue reproducible on the latest version of Chrome? NA
## Miscellaneous Information:
cc: @Douglashdaniel @muliswilliam @cypt4 ",1.0,"Add a loading state when switching between Filecoin Mainnet to Testnet -
## Description
Add a loading state when switching from Filecoin Mainnet to Testnet
## Steps to Reproduce
1. Import Filecoin from Ledger
2. Switch to Testnet
3. Shows Mainnet accounts for a few seconds before loading Testnet accounts
## Actual result:
https://user-images.githubusercontent.com/17010094/171572162-db074d4a-eeba-4deb-bfac-fcffed8472e3.mov
## Expected result:
Add a loading state so user doesn't pick mainnet accounts for testnet
## Reproduces how often:
Easy
## Brave version (brave://version info)
Brave | 1.40.80 Chromium: 102.0.5005.78 (Official Build) beta (64-bit)
-- | --
Revision | `df6dbb5a9fd82af3f567198af2eb5fb4876ef99c-refs/branch-heads/5005_59@{#3}`
OS | All
## Version/Channel Information:
- Can you reproduce this issue with the current release? NA
- Can you reproduce this issue with the beta channel? Yes
- Can you reproduce this issue with the nightly channel? Yes
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields? NA
- Does the issue resolve itself when disabling Brave Rewards? NA
- Is the issue reproducible on the latest version of Chrome? NA
## Miscellaneous Information:
cc: @Douglashdaniel @muliswilliam @cypt4 ",0,add a loading state when switching between filecoin mainnet to testnet have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description add a loading state when switching between filecoin mainnet to testnet steps to reproduce import filecoin from ledger switch to testnet shows mainnet accounts for a few seconds before loading testnet accounts actual result expected result add a loading state so user doesn t pick mainnet accounts for testnet reproduces how often easy brave version brave version info brave chromium official build beta bit revision refs branch heads os all version channel information can you reproduce this issue with the current release na can you reproduce this issue with the beta channel yes can you reproduce this issue with the nightly channel yes other additional information does the issue resolve itself when disabling brave shields na does the issue resolve itself when disabling brave rewards na is the issue reproducible on the latest version of chrome na miscellaneous information cc douglashdaniel muliswilliam ,0
1937,30476191690.0,IssuesEvent,2023-07-17 16:41:05,gemrb/gemrb,https://api.github.com/repos/gemrb/gemrb,opened,Check if anything is missing for our shipped xcode project to be replaced by an autogenerated one,enhancement janitorial portability,"Split from #1863 :
> I know cmake can be used to generate an Xcode project and that might work for us. Historically, we have our own Xcode because cmake wasn't quite good enough for our complex array of targets among other things. We also used to have high demand for the iOS build which cmake couldn't do either, but since the EEs nobody has asked about it so maybe we shouldn't care about it (and cmake may work now anyway).
Useful links:
https://cmake.org/cmake/help/git-stage/generator/Xcode.html
https://cmake.org/cmake/help/git-stage/variable/CMAKE_XCODE_ATTRIBUTE_an-attribute.html and some 50 more properties:
https://cmake.org/cmake/help/git-stage/manual/cmake-properties.7.html
",True,"Check if anything is missing for our shipped xcode project to be replaced by an autogenerated one - Split from #1863 :
> I know cmake can be used to generate an Xcode project and that might work for us. Historically, we have our own Xcode because cmake wasn't quite good enough for our complex array of targets among other things. We also used to have high demand for the iOS build which cmake couldn't do either, but since the EEs nobody has asked about it so maybe we shouldn't care about it (and cmake may work now anyway).
Useful links:
https://cmake.org/cmake/help/git-stage/generator/Xcode.html
https://cmake.org/cmake/help/git-stage/variable/CMAKE_XCODE_ATTRIBUTE_an-attribute.html and some 50 more properties:
https://cmake.org/cmake/help/git-stage/manual/cmake-properties.7.html
",1,check if anything is missing for our shipped xcode project to be replaced by an autogenerated one split from i know cmake can be used to generate an xcode project and that might work for us historically we have our own xcode because cmake wasn t quite good enough for our complex array of targets among other things we also used to have high demand for the ios build which cmake couldn t do either but since the ees nobody has asked about it so maybe we shouldn t care about it and cmake may work now anyway useful links and some more properties ,1
18525,6619278909.0,IssuesEvent,2017-09-21 11:34:23,grpc/grpc,https://api.github.com/repos/grpc/grpc,opened,Some C/C++ tests take unreasonable amount of time for what they're doing,infra/BUILDPONY priority/P1,"Problem:
our C/C++ PR test suites are capped at 1hr runtime (testcases stop being scheduled after 1hr of runtime), but we regularly see C++ asan and tsan test runs exceed 120 mins of runtime (and more).
Based on our analysis of per-testcase runtime, it seems some C++ test cases are taking way too long for what they're doing (e.g. 36 min average / 170 min max runtime for a 1-second benchmark).
Here's a data sample; more can be seen in https://data.corp.google.com/sites/qc42qojbor0i/build_duration_pr/
I expect major improvements in the number of tests we will be able to run, and a reduction in the PR test-suite spikes, once the top offenders are addressed.
```
job_name | test_name | avg_duration_mins | max_duration_mins
grpc/ubuntu/pull_request/grpc_cpp_tsan | bins/tsan/json_run_localhost --scenarios_json '{""scenarios"": [{""name"": ""cpp_protobuf_sync_streaming_from_server_qps_unconstrained_secure"", ""warmup_seconds"": 0, ""benchmark_seconds"": 1, ""num_servers"": 1, ""server_config"": {""async_server_threads"": 0, ""channel_args"": [{""str_value"": ""throughput"", ""name"": ""grpc.optimization_target""}], ""security_params"": {""use_test_ca"": true, ""server_host_override"": ""foo.test.google.fr""}, ""threads_per_cq"": 3, ""server_type"": ""SYNC_SERVER""}, ""num_clients"": 0, ""client_config"": {""security_params"": {""use_test_ca"": true, ""server_host_override"": ""foo.test.google.fr""}, ""channel_args"": [{""str_value"": ""throughput"", ""name"": ""grpc.optimization_target""}], ""async_client_threads"": 0, ""outstanding_rpcs_per_channel"": 1, ""rpc_type"": ""STREAMING_FROM_SERVER"", ""payload_config"": {""simple_params"": {""resp_size"": 0, ""req_size"": 0}}, ""client_channels"": 64, ""threads_per_cq"": 3, ""load_params"": {""closed_loop"": {}}, ""client_type"": ""SYNC_CLIENT"", ""histogram_params"": {""max_possible"": 60000000000.0, ""resolution"": 0.01}}}]}' GRPC_POLL_STRATEGY=poll-cv | 36.50 | 175.70 |
grpc/ubuntu/pull_request/grpc_cpp_tsan | json_run_localhost:cpp_protobuf_sync_streaming_from_server_qps_unconstrained_secure_low_thread_count GRPC_POLL_STRATEGY=epollsig | 33.40 | 100.10 |
grpc/ubuntu/pull_request/grpc_cpp_asan | bins/asan/json_run_localhost --scenarios_json '{""scenarios"": [{""name"": ""cpp_protobuf_sync_streaming_from_server_qps_unconstrained_secure"", ""warmup_seconds"": 0, ""benchmark_seconds"": 1, ""num_servers"": 1, ""server_config"": {""async_server_threads"": 0, ""channel_args"": [{""str_value"": ""throughput"", ""name"": ""grpc.optimization_target""}], ""security_params"": {""use_test_ca"": true, ""server_host_override"": ""foo.test.google.fr""}, ""threads_per_cq"": 3, ""server_type"": ""SYNC_SERVER""}, ""num_clients"": 0, ""client_config"": {""security_params"": {""use_test_ca"": true, ""server_host_override"": ""foo.test.google.fr""}, ""channel_args"": [{""str_value"": ""throughput"", ""name"": ""grpc.optimization_target""}], ""async_client_threads"": 0, ""outstanding_rpcs_per_channel"": 1, ""rpc_type"": ""STREAMING_FROM_SERVER"", ""payload_config"": {""simple_params"": {""resp_size"": 0, ""req_size"": 0}}, ""client_channels"": 64, ""threads_per_cq"": 3, ""load_params"": {""closed_loop"": {}}, ""client_type"": ""SYNC_CLIENT"", ""histogram_params"": {""max_possible"": 60000000000.0, ""resolution"": 0.01}}}]}' GRPC_POLL_STRATEGY=epoll1 | 20.50 | 89.40 |
grpc/ubuntu/pull_request/grpc_cpp_tsan | bins/tsan/json_run_localhost --scenarios_json '{""scenarios"": [{""name"": ""cpp_protobuf_sync_streaming_from_server_qps_unconstrained_secure"", ""warmup_seconds"": 0, ""benchmark_seconds"": 1, ""num_servers"": 1, ""server_config"": {""async_server_threads"": 0, ""channel_args"": [{""str_value"": ""throughput"", ""name"": ""grpc.optimization_target""}], ""security_params"": {""use_test_ca"": true, ""server_host_override"": ""foo.test.google.fr""}, ""threads_per_cq"": 3, ""server_type"": ""SYNC_SERVER""}, ""num_clients"": 0, ""client_config"": {""security_params"": {""use_test_ca"": true, ""server_host_override"": ""foo.test.google.fr""}, ""channel_args"": [{""str_value"": ""throughput"", ""name"": ""grpc.optimization_target""}], ""async_client_threads"": 0, ""outstanding_rpcs_per_channel"": 1, ""rpc_type"": ""STREAMING_FROM_SERVER"", ""payload_config"": {""simple_params"": {""resp_size"": 0, ""req_size"": 0}}, ""client_channels"": 64, ""threads_per_cq"": 3, ""load_params"": {""closed_loop"": {}}, ""client_type"": ""SYNC_CLIENT"", ""histogram_params"": {""max_possible"": 60000000000.0, ""resolution"": 0.01}}}]}' GRPC_POLL_STRATEGY=epollsig | 29.40 | 52.90 |
grpc/ubuntu/pull_request/grpc_cpp_tsan | bins/tsan/json_run_localhost --scenarios_json '{""scenarios"": [{""name"": ""cpp_protobuf_sync_streaming_from_server_qps_unconstrained_secure"", ""warmup_seconds"": 0, ""benchmark_seconds"": 1, ""num_servers"": 1, ""server_config"": {""async_server_threads"": 0, ""channel_args"": [{""str_value"": ""throughput"", ""name"": ""grpc.optimization_target""}], ""security_params"": {""use_test_ca"": true, ""server_host_override"": ""foo.test.google.fr""}, ""threads_per_cq"": 3, ""server_type"": ""SYNC_SERVER""}, ""num_clients"": 0, ""client_config"": {""security_params"": {""use_test_ca"": true, ""server_host_override"": ""foo.test.google.fr""}, ""channel_args"": [{""str_value"": ""throughput"", ""name"": ""grpc.optimization_target""}], ""async_client_threads"": 0, ""outstanding_rpcs_per_channel"": 1, ""rpc_type"": ""STREAMING_FROM_SERVER"", ""payload_config"": {""simple_params"": {""resp_size"": 0, ""req_size"": 0}}, ""client_channels"": 64, ""threads_per_cq"": 3, ""load_params"": {""closed_loop"": {}}, ""client_type"": ""SYNC_CLIENT"", ""histogram_params"": {""max_possible"": 60000000000.0, ""resolution"": 0.01}}}]}' GRPC_POLL_STRATEGY=epoll1 | 11.60 | 43.30
```",1.0,"Some C/C++ tests take unreasonable amount of time for what they're doing - Problem:
our C/C++ PR test suites are capped at 1hr runtime (testcases stop being scheduled after 1hr of runtime), but we regularly see C++ asan and tsan test runs exceed 120 mins of runtime (and more).
Based on our analysis of per-testcase runtime, it seems some C++ test cases are taking way too long for what they're doing (e.g. 36 min average / 170 min max runtime for a 1-second benchmark).
Here's a data sample; more can be seen in https://data.corp.google.com/sites/qc42qojbor0i/build_duration_pr/
I expect major improvements in the number of tests we will be able to run, and a reduction in the PR test-suite spikes, once the top offenders are addressed.
```
job_name | test_name | avg_duration_mins | max_duration_mins
grpc/ubuntu/pull_request/grpc_cpp_tsan | bins/tsan/json_run_localhost --scenarios_json '{""scenarios"": [{""name"": ""cpp_protobuf_sync_streaming_from_server_qps_unconstrained_secure"", ""warmup_seconds"": 0, ""benchmark_seconds"": 1, ""num_servers"": 1, ""server_config"": {""async_server_threads"": 0, ""channel_args"": [{""str_value"": ""throughput"", ""name"": ""grpc.optimization_target""}], ""security_params"": {""use_test_ca"": true, ""server_host_override"": ""foo.test.google.fr""}, ""threads_per_cq"": 3, ""server_type"": ""SYNC_SERVER""}, ""num_clients"": 0, ""client_config"": {""security_params"": {""use_test_ca"": true, ""server_host_override"": ""foo.test.google.fr""}, ""channel_args"": [{""str_value"": ""throughput"", ""name"": ""grpc.optimization_target""}], ""async_client_threads"": 0, ""outstanding_rpcs_per_channel"": 1, ""rpc_type"": ""STREAMING_FROM_SERVER"", ""payload_config"": {""simple_params"": {""resp_size"": 0, ""req_size"": 0}}, ""client_channels"": 64, ""threads_per_cq"": 3, ""load_params"": {""closed_loop"": {}}, ""client_type"": ""SYNC_CLIENT"", ""histogram_params"": {""max_possible"": 60000000000.0, ""resolution"": 0.01}}}]}' GRPC_POLL_STRATEGY=poll-cv | 36.50 | 175.70 |
grpc/ubuntu/pull_request/grpc_cpp_tsan | json_run_localhost:cpp_protobuf_sync_streaming_from_server_qps_unconstrained_secure_low_thread_count GRPC_POLL_STRATEGY=epollsig | 33.40 | 100.10 |
grpc/ubuntu/pull_request/grpc_cpp_asan | bins/asan/json_run_localhost --scenarios_json '{""scenarios"": [{""name"": ""cpp_protobuf_sync_streaming_from_server_qps_unconstrained_secure"", ""warmup_seconds"": 0, ""benchmark_seconds"": 1, ""num_servers"": 1, ""server_config"": {""async_server_threads"": 0, ""channel_args"": [{""str_value"": ""throughput"", ""name"": ""grpc.optimization_target""}], ""security_params"": {""use_test_ca"": true, ""server_host_override"": ""foo.test.google.fr""}, ""threads_per_cq"": 3, ""server_type"": ""SYNC_SERVER""}, ""num_clients"": 0, ""client_config"": {""security_params"": {""use_test_ca"": true, ""server_host_override"": ""foo.test.google.fr""}, ""channel_args"": [{""str_value"": ""throughput"", ""name"": ""grpc.optimization_target""}], ""async_client_threads"": 0, ""outstanding_rpcs_per_channel"": 1, ""rpc_type"": ""STREAMING_FROM_SERVER"", ""payload_config"": {""simple_params"": {""resp_size"": 0, ""req_size"": 0}}, ""client_channels"": 64, ""threads_per_cq"": 3, ""load_params"": {""closed_loop"": {}}, ""client_type"": ""SYNC_CLIENT"", ""histogram_params"": {""max_possible"": 60000000000.0, ""resolution"": 0.01}}}]}' GRPC_POLL_STRATEGY=epoll1 | 20.50 | 89.40 |
grpc/ubuntu/pull_request/grpc_cpp_tsan | bins/tsan/json_run_localhost --scenarios_json '{""scenarios"": [{""name"": ""cpp_protobuf_sync_streaming_from_server_qps_unconstrained_secure"", ""warmup_seconds"": 0, ""benchmark_seconds"": 1, ""num_servers"": 1, ""server_config"": {""async_server_threads"": 0, ""channel_args"": [{""str_value"": ""throughput"", ""name"": ""grpc.optimization_target""}], ""security_params"": {""use_test_ca"": true, ""server_host_override"": ""foo.test.google.fr""}, ""threads_per_cq"": 3, ""server_type"": ""SYNC_SERVER""}, ""num_clients"": 0, ""client_config"": {""security_params"": {""use_test_ca"": true, ""server_host_override"": ""foo.test.google.fr""}, ""channel_args"": [{""str_value"": ""throughput"", ""name"": ""grpc.optimization_target""}], ""async_client_threads"": 0, ""outstanding_rpcs_per_channel"": 1, ""rpc_type"": ""STREAMING_FROM_SERVER"", ""payload_config"": {""simple_params"": {""resp_size"": 0, ""req_size"": 0}}, ""client_channels"": 64, ""threads_per_cq"": 3, ""load_params"": {""closed_loop"": {}}, ""client_type"": ""SYNC_CLIENT"", ""histogram_params"": {""max_possible"": 60000000000.0, ""resolution"": 0.01}}}]}' GRPC_POLL_STRATEGY=epollsig | 29.40 | 52.90 |
grpc/ubuntu/pull_request/grpc_cpp_tsan | bins/tsan/json_run_localhost --scenarios_json '{""scenarios"": [{""name"": ""cpp_protobuf_sync_streaming_from_server_qps_unconstrained_secure"", ""warmup_seconds"": 0, ""benchmark_seconds"": 1, ""num_servers"": 1, ""server_config"": {""async_server_threads"": 0, ""channel_args"": [{""str_value"": ""throughput"", ""name"": ""grpc.optimization_target""}], ""security_params"": {""use_test_ca"": true, ""server_host_override"": ""foo.test.google.fr""}, ""threads_per_cq"": 3, ""server_type"": ""SYNC_SERVER""}, ""num_clients"": 0, ""client_config"": {""security_params"": {""use_test_ca"": true, ""server_host_override"": ""foo.test.google.fr""}, ""channel_args"": [{""str_value"": ""throughput"", ""name"": ""grpc.optimization_target""}], ""async_client_threads"": 0, ""outstanding_rpcs_per_channel"": 1, ""rpc_type"": ""STREAMING_FROM_SERVER"", ""payload_config"": {""simple_params"": {""resp_size"": 0, ""req_size"": 0}}, ""client_channels"": 64, ""threads_per_cq"": 3, ""load_params"": {""closed_loop"": {}}, ""client_type"": ""SYNC_CLIENT"", ""histogram_params"": {""max_possible"": 60000000000.0, ""resolution"": 0.01}}}]}' GRPC_POLL_STRATEGY=epoll1 | 11.60 | 43.30
```",0,some c c tests take unreasonable amount of time for what they re doing problem our c c pr test suites are capped at runtime testcases stop to be scheduled after of runtime but we are regularly seeing c asan and tsan test runs to exceed mins runtime and more based on our analysis of runtime per testcase it seems some c tests cases are taking way too long for what they re doing e g min average min max runtime for a second benchmark here s the data sample more can be seen in i expect major improvments the amount of tests we will be able to run and reducing the pr tesuite spikes once the top offenders are addressed job name test name avg duration mins max duration mins grpc ubuntu pull request grpc cpp tsan bins tsan json run localhost scenarios json scenarios security params use test ca true server host override foo test google fr threads per cq server type sync server num clients client config security params use test ca true server host override foo test google fr channel args async client threads outstanding rpcs per channel rpc type streaming from server payload config simple params resp size req size client channels threads per cq load params closed loop client type sync client histogram params max possible resolution grpc poll strategy poll cv grpc ubuntu pull request grpc cpp tsan json run localhost cpp protobuf sync streaming from server qps unconstrained secure low thread count grpc poll strategy epollsig grpc ubuntu pull request grpc cpp asan bins asan json run localhost scenarios json scenarios security params use test ca true server host override foo test google fr threads per cq server type sync server num clients client config security params use test ca true server host override foo test google fr channel args async client threads outstanding rpcs per channel rpc type streaming from server payload config simple params resp size req size client channels threads per cq load params closed loop client type sync client histogram params max possible resolution grpc poll strategy grpc ubuntu pull request grpc cpp tsan bins tsan json run localhost scenarios json scenarios security params use test ca true server host override foo test google fr threads per cq server type sync server num clients client config security params use test ca true server host override foo test google fr channel args async client threads outstanding rpcs per channel rpc type streaming from server payload config simple params resp size req size client channels threads per cq load params closed loop client type sync client histogram params max possible resolution grpc poll strategy epollsig grpc ubuntu pull request grpc cpp tsan bins tsan json run localhost scenarios json scenarios security params use test ca true server host override foo test google fr threads per cq server type sync server num clients client config security params use test ca true server host override foo test google fr channel args async client threads outstanding rpcs per channel rpc type streaming from server payload config simple params resp size req size client channels threads per cq load params closed loop client type sync client histogram params max possible resolution grpc poll strategy ,0
1091,13926082334.0,IssuesEvent,2020-10-21 17:46:16,OpenSCAP/openscap,https://api.github.com/repos/OpenSCAP/openscap,closed,Building from source as debian,help wanted portability,"I have a newbie question, and sorry for noise,
I'm trying to build it as debian and I'm hitting the vendor_perl versus the perl5/5.30 issue. In CMakefile for swig/perl it uses vendor_perl instead of what it used previously in version 1.2.17 : vendorarch=""$( $PERL -e 'use Config; print $Config{vendorarch}' | sed ""s|$($PERL -e 'use Config; print $Config{prefix}')||"" )
My question is: was that intentional? Is vendor_perl supposed to be resolved somehow by dh_help? Or is that a bug?
This is the issue message:
dh_install: warning: Cannot find (any matches for) ""usr/lib/x86_64-linux-gnu/perl5/5.30"" (tried in ., debian/tmp)
dh_install: warning: libopenscap-perl missing files: usr/lib/x86_64-linux-gnu/perl5/5.30
And it happens because /debian/tmp/usr/lib/x86-64-linux-gnu/perl5 has /vendor_perl instead of 5.30.
Thanks in advance",True,"Building from source as debian - I have a newbie question, and sorry for noise,
I'm trying to build it as debian and I'm hitting the vendor_perl versus the perl5/5.30 issue. In CMakefile for swig/perl it uses vendor_perl instead of what it used previously in version 1.2.17 : vendorarch=""$( $PERL -e 'use Config; print $Config{vendorarch}' | sed ""s|$($PERL -e 'use Config; print $Config{prefix}')||"" )
My question is: was that intentional? Is vendor_perl supposed to be resolved somehow by dh_help? Or is that a bug?
This is the issue message:
dh_install: warning: Cannot find (any matches for) ""usr/lib/x86_64-linux-gnu/perl5/5.30"" (tried in ., debian/tmp)
dh_install: warning: libopenscap-perl missing files: usr/lib/x86_64-linux-gnu/perl5/5.30
And it happens because /debian/tmp/usr/lib/x86-64-linux-gnu/perl5 has /vendor_perl instead of 5.30.
Thanks in advance",1,building from source as debian i have a newbie question and sorry for noise i m trying to build it as debian and i m hitting the vendor perl versus the issue in cmakefile for swig perl it uses vendor perl instead what it used previously in version vendorarch perl e use config print config vendorarch sed s perl e use config print config prefix my question is that was intentional is vendor perl supposed to be resolved somehow by dh help or is that bug that is the issue msg dh install warning cannot find any matches for usr lib linux gnu tried in debian tmp dh install warning libopenscap perl missing files usr lib linux gnu and it happens because debian tmp usr lib linux gnu has vendor perl instead thanks in advance,1
1946,30567722808.0,IssuesEvent,2023-07-20 19:10:40,instantiations/tonel-vast,https://api.github.com/repos/instantiations/tonel-vast,closed,Normalize instance variables names when creating TonelReaderClassDefinition,enhancement portability importing,"Currently the instance variable names defined in a Tonel class definition are instantiated by parsing the literals with STON.
E.g.
```smalltalk
Class {
#name : #SBSActionableBadgeTag,
#superclass : #SBSAnchorTag,
#instVars : [
#modifier,
#contextStyle
],
#category : #'Bootstrap5-Core-Base'
}
```
But if the `#instVars` attribute defines the instance variable names as symbols, then this could cause issues when comparing with a VAST class that answers instances of Strings as a response to `#instVarNames`.
So it is better to normalize the elements in the collection to be instances of `String` when instantiating `TonelReaderClassDefinition`.",True,"Normalize instance variables names when creating TonelReaderClassDefinition - Currently the instance variable names defined in a Tonel class definition are instantiated by parsing the literals with STON.
E.g.
```smalltalk
Class {
#name : #SBSActionableBadgeTag,
#superclass : #SBSAnchorTag,
#instVars : [
#modifier,
#contextStyle
],
#category : #'Bootstrap5-Core-Base'
}
```
But if the `#instVars` attribute defines the instance variable names as symbols, then this could cause issues when comparing with a VAST class that answers instances of Strings as a response to `#instVarNames`.
So it is better to normalize the elements in the collection to be instances of `String` when instantiating `TonelReaderClassDefinition`.",1,normalize instance variables names when creating tonelreaderclassdefinition currently the instance variable names defined in in a tonel class definition are instantiated parsing the literals with ston e g smalltalk class name sbsactionablebadgetag superclass sbsanchortag instvars modifier contextstyle category core base but if the instvars attribute defines the instance variable names as symbols then this could cause issues when comparing with a vast class that answers instances of strings as a response to instvarnames so it is better to normalize the elements in the collection to be instances of string when instantiating tonelreaderclassdefinition ,1
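The normalization itself is simple; as a language-neutral illustration only (the project is Smalltalk, so this Python sketch merely mirrors the idea, with symbols modelled as '#'-prefixed strings), the parsed `#instVars` entries would be coerced to plain strings before comparing them against the class's declared instance variable names.

```python
# Sketch of the normalization idea: parsed Tonel #instVars entries may come back
# as symbols (modelled here as strings with a leading '#'), but comparisons with
# the class's instVarNames should happen on plain strings.
# Names below are illustrative, not taken from tonel-vast.
def normalize_inst_vars(parsed_inst_vars):
    return [str(name).lstrip("#") for name in parsed_inst_vars]

parsed = ["#modifier", "#contextStyle"]    # as read from the Tonel file
declared = ["modifier", "contextStyle"]    # as answered by #instVarNames

assert normalize_inst_vars(parsed) == declared
```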
650,8682283488.0,IssuesEvent,2018-12-02 06:16:50,chapel-lang/chapel,https://api.github.com/repos/chapel-lang/chapel,closed,Support Python 3,area: BTR area: Third-Party issues week type: Portability,"Python 3 as default `python` is becoming more common. Notably, Homebrew recently made python 3 the default when doing `brew install python`. Also, [Ubuntu 18.04](https://wiki.ubuntu.com/Python/Python36Transition) (releases April 26th, 2018) will switch to python 3 as default.
Consequently, it would be worthwhile to support python 3 for most if not all components of Chapel, including developer features.
[PEP 394](https://www.python.org/dev/peps/pep-0394/) describes how projects can prepare for this transition. Namely, we should use `python2` or `python3` in the shebang to specify which python is supported, if the program does not support both `python2` and `python3`.
Note that Chapel will need to continue supporting Python 2.6.9 until Chapel drops support for CLE 5 / SLES 11 [(EOL March, 2019)](https://www.suse.com/lifecycle/).
Here is a rough overview of current python support, by component:
| | 2.6.9 | 2.7.14 | 3.6.0 |
|-------------------------|--------------------|--------------------|--------------------|
| `printchplenv` | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| `make chpldoc` | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| `make docs` | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| `make test-venv` | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Testing system | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| `c2chapel` | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Other `util/` scripts | ? | :white_check_mark: | ? |
| `chpl --library-python` | :x: | :x: | :white_check_mark: |
",True,"Support Python 3 - Python 3 as default `python` is becoming more common. Notably, Homebrew recently made python 3 the default when doing `brew install python`. Also, [Ubuntu 18.0.4](https://wiki.ubuntu.com/Python/Python36Transition) (releases April 26th, 2018) will switch to python 3 as default.
Consequently, it would be worthwhile to support python 3 for most if not all components of Chapel, including developer features.
[PEP 394](https://www.python.org/dev/peps/pep-0394/) describes how projects can prepare for this transition. Namely, we should use `python2` or `python3` in the shebang to specify which python is supported, if the program does not support both `python2` and `python3`.
Note that Chapel will need to continue supporting Python 2.6.9 until Chapel drops support for CLE 5 / SLES 11 [(EOL March, 2019)](https://www.suse.com/lifecycle/).
Here is a rough overview of current python support, by component:
| | 2.6.9 | 2.7.14 | 3.6.0 |
|-------------------------|--------------------|--------------------|--------------------|
| `printchplenv` | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| `make chpldoc` | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| `make docs` | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| `make test-venv` | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Testing system | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| `c2chapel` | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Other `util/` scripts | ? | :white_check_mark: | ? |
| `chpl --library-python` | :x: | :x: | :white_check_mark: |
",1,support python python as default python is becoming more common notably homebrew recently made python the default when doing brew install python also releases april will switch to python as default consequently it would be worthwhile to support python for most if not all components of chapel including developer features describes how projects can prepare for this transition namely we should use or in the shebang to specify which python is supported if the program does not support both and note that chapel will need to continue supporting python until chapel drops support for cle sles here is a rough overview of current python support by component printchplenv white check mark white check mark white check mark make chpldoc white check mark white check mark white check mark make docs white check mark white check mark white check mark make test venv white check mark white check mark white check mark testing system white check mark white check mark white check mark white check mark white check mark white check mark other util scripts white check mark chpl library python x x white check mark ,1
1526,22156120321.0,IssuesEvent,2022-06-03 22:57:53,apache/beam,https://api.github.com/repos/apache/beam,opened,Flink portable runner GRPC cleanup failure after user class loader was removed,portability P3 improvement runner-flink,"Looks like another attempt to perform cleanup after close.
Imported from Jira [BEAM-5397](https://issues.apache.org/jira/browse/BEAM-5397). Original Jira may contain additional context.
Reported by: thw.",True,"Flink portable runner GRPC cleanup failure after user class loader was removed - Looks like another attempt to perform cleanup after close.
Imported from Jira [BEAM-5397](https://issues.apache.org/jira/browse/BEAM-5397). Original Jira may contain additional context.
Reported by: thw.",1,flink portable runner grpc cleanup failure after user class loader was removed looks like another attempt to perform cleanup after close imported from jira original jira may contain additional context reported by thw ,1
1227,2534223336.0,IssuesEvent,2015-01-24 18:45:22,mathjax/MathJax,https://api.github.com/repos/mathjax/MathJax,closed,Regression in v2.5 beta with handling of LaTeX `label` command,Accepted Merged QA - Unit Test Wanted,"There seem to be some weird regressions in the handling of TeX equations with a `\label` command in v2.5. The problem seems to be on the parsing side since it happens with the default configuration that doesn't even have equation numbering turned on.
The following parses correctly with the current v2.4 off CDN

but if you change it to the v2.5 beta, it refuses to parse

removing the label command causes it to parse correctly again

",1.0,"Regression in v2.5 beta with handling of LaTeX `label` command - There seems to be some weird regressions in the handling of TeX equations with a `\label` command with v2.5. The problem seems to be on the parsing side since it happens with the default configuration that doesn't even have equation numbering turned on.
The following parses correctly with the current v2.4 off CDN

but if you change it to the v2.5 beta, it refuses to parse

removing the label command causes it to parse correctly again

",0,regression in beta with handling of latex label command there seems to be some weird regressions in the handling of tex equations with a label command with the problem seems to be on the parsing side since it happens with the default configuration that doesn t even have equation numbering turned on the following parses correctly with the current off cdn but if you change it to the beta it refuses to parse removing the label command causes it to parse correctly again ,0
32442,12127238841.0,IssuesEvent,2020-04-22 18:22:53,silinternational/serverless-mfa-api,https://api.github.com/repos/silinternational/serverless-mfa-api,opened,CVE-2012-6708 (Medium) detected in jquery-1.7.2.min.js,security vulnerability,"## CVE-2012-6708 - Medium Severity Vulnerability
Vulnerable Library - jquery-1.7.2.min.js
JavaScript library for DOM operations
Library home page: https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js
Path to dependency file: /tmp/ws-scm/serverless-mfa-api/node_modules/jmespath/index.html
Path to vulnerable library: /serverless-mfa-api/node_modules/jmespath/index.html
Dependency Hierarchy:
- :x: **jquery-1.7.2.min.js** (Vulnerable Library)
Found in HEAD commit: bc7a5cb545c98937d5fc3a8b979879b0177a757a
Vulnerability Details
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
Publish Date: 2018-01-18
URL: CVE-2012-6708
CVSS 3 Score Details (6.1 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://nvd.nist.gov/vuln/detail/CVE-2012-6708
Release Date: 2018-01-18
Fix Resolution: jQuery - v1.9.0
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2012-6708 (Medium) detected in jquery-1.7.2.min.js - ## CVE-2012-6708 - Medium Severity Vulnerability
Vulnerable Library - jquery-1.7.2.min.js
JavaScript library for DOM operations
Library home page: https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js
Path to dependency file: /tmp/ws-scm/serverless-mfa-api/node_modules/jmespath/index.html
Path to vulnerable library: /serverless-mfa-api/node_modules/jmespath/index.html
Dependency Hierarchy:
- :x: **jquery-1.7.2.min.js** (Vulnerable Library)
Found in HEAD commit: bc7a5cb545c98937d5fc3a8b979879b0177a757a
Vulnerability Details
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
Publish Date: 2018-01-18
URL: CVE-2012-6708
CVSS 3 Score Details (6.1 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://nvd.nist.gov/vuln/detail/CVE-2012-6708
Release Date: 2018-01-18
Fix Resolution: jQuery - v1.9.0
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm serverless mfa api node modules jmespath index html path to vulnerable library serverless mfa api node modules jmespath index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks the jquery strinput function does not differentiate selectors from html in a reliable fashion in vulnerable versions jquery determined whether the input was html by looking for the character anywhere in the string giving attackers more flexibility when attempting to construct a malicious payload in fixed versions jquery only deems the input to be html if it explicitly starts with the character limiting exploitability only to attackers who can control the beginning of a string which is far less common publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource ,0
197938,22617763559.0,IssuesEvent,2022-06-30 01:06:16,turkdevops/vue-cli,https://api.github.com/repos/turkdevops/vue-cli,opened,CVE-2022-0722 (Medium) detected in parse-url-5.0.1.tgz,security vulnerability,"## CVE-2022-0722 - Medium Severity Vulnerability
Vulnerable Library - parse-url-5.0.1.tgz
An advanced url parser supporting git urls too.
Library home page: https://registry.npmjs.org/parse-url/-/parse-url-5.0.1.tgz
Path to dependency file: /package.json
Path to vulnerable library: /node_modules/parse-url
Dependency Hierarchy:
- lerna-3.13.4.tgz (Root Library)
- version-3.13.4.tgz
- github-client-3.13.3.tgz
- git-url-parse-11.1.2.tgz
- git-up-4.0.1.tgz
- :x: **parse-url-5.0.1.tgz** (Vulnerable Library)
Found in HEAD commit: b9888ec61e269386b4fab790d7d16670ad49b548
Found in base branch: fix-babel-core-js
Vulnerability Details
Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository ionicabizau/parse-url prior to 7.0.0.
Publish Date: 2022-06-27
URL: CVE-2022-0722
CVSS 3 Score Details (4.8 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://huntr.dev/bounties/2490ef6d-5577-4714-a4dd-9608251b4226
Release Date: 2022-06-27
Fix Resolution: parse-url - 6.0.1
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-0722 (Medium) detected in parse-url-5.0.1.tgz - ## CVE-2022-0722 - Medium Severity Vulnerability
Vulnerable Library - parse-url-5.0.1.tgz
An advanced url parser supporting git urls too.
Library home page: https://registry.npmjs.org/parse-url/-/parse-url-5.0.1.tgz
Path to dependency file: /package.json
Path to vulnerable library: /node_modules/parse-url
Dependency Hierarchy:
- lerna-3.13.4.tgz (Root Library)
- version-3.13.4.tgz
- github-client-3.13.3.tgz
- git-url-parse-11.1.2.tgz
- git-up-4.0.1.tgz
- :x: **parse-url-5.0.1.tgz** (Vulnerable Library)
Found in HEAD commit: b9888ec61e269386b4fab790d7d16670ad49b548
Found in base branch: fix-babel-core-js
Vulnerability Details
Exposure of Sensitive Information to an Unauthorized Actor in GitHub repository ionicabizau/parse-url prior to 7.0.0.
Publish Date: 2022-06-27
URL: CVE-2022-0722
CVSS 3 Score Details (4.8 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://huntr.dev/bounties/2490ef6d-5577-4714-a4dd-9608251b4226
Release Date: 2022-06-27
Fix Resolution: parse-url - 6.0.1
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in parse url tgz cve medium severity vulnerability vulnerable library parse url tgz an advanced url parser supporting git urls too library home page a href path to dependency file package json path to vulnerable library node modules parse url dependency hierarchy lerna tgz root library version tgz github client tgz git url parse tgz git up tgz x parse url tgz vulnerable library found in head commit a href found in base branch fix babel core js vulnerability details exposure of sensitive information to an unauthorized actor in github repository ionicabizau parse url prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution parse url step up your open source security game with mend ,0
1825,26904637070.0,IssuesEvent,2023-02-06 18:03:46,chapel-lang/chapel,https://api.github.com/repos/chapel-lang/chapel,closed,Installation issue for version 1.25.1,area: Makefiles / Scripts user issue type: Portability,"I was attempting to install version 1.25.1 when
I tried updating the PATH using the following command:
CHPL_BIN_SUBDIR=`""$CHPL_HOME""/util/chplenv/chpl_bin_subdir.py`
I received the following error message:
(it's actually the last part of the traceback)
File ""/home/kurt/chapel-1.25.1/util/chplenv/chpl_compiler.py"", line 6, in
from distutils.spawn import find_executable
ModuleNotFoundError: No module named 'distutils.spawn'
Any suggestions?
",True,"Installation issue for version 1.25.1 - I was attempting to install version 1.25.1 when
I tried updating the PATH using the following command:
CHPL_BIN_SUBDIR=`""$CHPL_HOME""/util/chplenv/chpl_bin_subdir.py`
I received the following error message:
(it's actually the last part of the traceback)
File ""/home/kurt/chapel-1.25.1/util/chplenv/chpl_compiler.py"", line 6, in
from distutils.spawn import find_executable
ModuleNotFoundError: No module named 'distutils.spawn'
Any suggestions?
",1,installation issue for version i was attempting to install version when i tried updating the path using the following command chpl bin subdir chpl home util chplenv chpl bin subdir py i recieved the following error message its actually the last part of the traceback file home kurt chapel util chplenv chpl compiler py line in from distutils spawn import find executable modulenotfounderror no module named distutils spawn any suggestions ,1
1242,16538891062.0,IssuesEvent,2021-05-27 14:37:11,openwall/john,https://api.github.com/repos/openwall/john,reopened,Consider working around Git on Windows' use of CRLF vs. bash scripts,portability,"When I try to run ./configure on the /john/src file in Cygwin on my Windows 10 device, I get the following error:
$ ./configure
./configure: line 16: $'\r': command not found
./configure: line 31: syntax error near unexpected token `newline'
'/configure: line 31: ` ;;
When I looked at the /src file I found this disclaimer starting on line 17211:
# On cygwin, bash can eat \r inside `` if the user requested igncr.
# But we know of no other shell where ac_cr would be empty at this
# point, so we can use a bashism as a fallback.
Is there a workaround I can use?",True,"Consider working around Git on Windows' use of CRLF vs. bash scripts - When I try to run ./configure on the /john/src file in Cygwin on my Windows 10 device, I get the following error:
$ ./configure
./configure: line 16: $'\r': command not found
./configure: line 31: syntax error near unexpected token `newline'
'/configure: line 31: ` ;;
When I looked at the /src file I found this disclaimer starting on line 17211:
# On cygwin, bash can eat \r inside `` if the user requested igncr.
# But we know of no other shell where ac_cr would be empty at this
# point, so we can use a bashism as a fallback.
Is there a workaround I can use?",1,consider working around git on windows use of crlf vs bash scripts when i try to run configure on the john src file in cygwin on my windows device i get the following error configure configure line r command not found configure line syntax error near unexpected token newline configure line when i looked at the src file i found this disclaimer starting on line on cygwin bash can eat r inside if the user requested igncr but we know of no other shell where ac cr would be empty at this point so we can use a bashism as a fallback is there a workaround i can use ,1
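One workaround commonly used for this class of problem (offered as a suggestion here, not quoted from the original thread) is to make sure the checked-out scripts have LF line endings before running them, e.g. with `dos2unix configure`; where that tool is unavailable, a small Python equivalent works too.

```python
# Rewrite a script in place with LF line endings so bash stops tripping over \r.
# Equivalent in spirit to `dos2unix configure`; the file name is just an example.
path = "configure"
with open(path, "rb") as f:
    data = f.read()
with open(path, "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))
```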
61839,14642853956.0,IssuesEvent,2020-12-25 13:17:18,fu1771695yongxie/freeCodeCamp,https://api.github.com/repos/fu1771695yongxie/freeCodeCamp,opened,CVE-2012-6708 (Medium) detected in multiple libraries,security vulnerability,"## CVE-2012-6708 - Medium Severity Vulnerability
Vulnerable Libraries - jquery-1.7.2.min.js , jquery-1.3.2.min.js , jquery-1.7.1.min.js
jquery-1.7.2.min.js
JavaScript library for DOM operations
Library home page: https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js
Path to dependency file: freeCodeCamp/api-server/node_modules/jmespath/index.html
Path to vulnerable library: freeCodeCamp/api-server/node_modules/jmespath/index.html
Dependency Hierarchy:
- :x: **jquery-1.7.2.min.js** (Vulnerable Library)
jquery-1.3.2.min.js
JavaScript library for DOM operations
Library home page: https://cdnjs.cloudflare.com/ajax/libs/jquery/1.3.2/jquery.min.js
Path to dependency file: freeCodeCamp/tools/contributor/lib/node_modules/underscore.string/test/test_underscore/temp_tests.html
Path to vulnerable library: freeCodeCamp/tools/contributor/lib/node_modules/underscore.string/test/test_underscore/vendor/jquery.js
Dependency Hierarchy:
- :x: **jquery-1.3.2.min.js** (Vulnerable Library)
jquery-1.7.1.min.js
JavaScript library for DOM operations
Library home page: https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js
Path to dependency file: freeCodeCamp/tools/contributor/dashboard-app/client/node_modules/sockjs/examples/multiplex/index.html
Path to vulnerable library: freeCodeCamp/tools/contributor/dashboard-app/client/node_modules/sockjs/examples/multiplex/index.html,freeCodeCamp/tools/contributor/dashboard-app/client/node_modules/sockjs/examples/echo/index.html,freeCodeCamp/tools/contributor/dashboard-app/client/node_modules/sockjs/examples/express-3.x/index.html,freeCodeCamp/tools/contributor/dashboard-app/client/node_modules/sockjs/examples/hapi/html/index.html
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
Found in HEAD commit: 311e89b65c48c9468ebef29fa6a623b9e24a3093
Found in base branch: master
Vulnerability Details
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
Publish Date: 2018-01-18
URL: CVE-2012-6708
CVSS 3 Score Details (6.1 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://nvd.nist.gov/vuln/detail/CVE-2012-6708
Release Date: 2018-01-18
Fix Resolution: jQuery - v1.9.0
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2012-6708 (Medium) detected in multiple libraries - ## CVE-2012-6708 - Medium Severity Vulnerability
Vulnerable Libraries - jquery-1.7.2.min.js , jquery-1.3.2.min.js , jquery-1.7.1.min.js
jquery-1.7.2.min.js
JavaScript library for DOM operations
Library home page: https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.2/jquery.min.js
Path to dependency file: freeCodeCamp/api-server/node_modules/jmespath/index.html
Path to vulnerable library: freeCodeCamp/api-server/node_modules/jmespath/index.html
Dependency Hierarchy:
- :x: **jquery-1.7.2.min.js** (Vulnerable Library)
jquery-1.3.2.min.js
JavaScript library for DOM operations
Library home page: https://cdnjs.cloudflare.com/ajax/libs/jquery/1.3.2/jquery.min.js
Path to dependency file: freeCodeCamp/tools/contributor/lib/node_modules/underscore.string/test/test_underscore/temp_tests.html
Path to vulnerable library: freeCodeCamp/tools/contributor/lib/node_modules/underscore.string/test/test_underscore/vendor/jquery.js
Dependency Hierarchy:
- :x: **jquery-1.3.2.min.js** (Vulnerable Library)
jquery-1.7.1.min.js
JavaScript library for DOM operations
Library home page: https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js
Path to dependency file: freeCodeCamp/tools/contributor/dashboard-app/client/node_modules/sockjs/examples/multiplex/index.html
Path to vulnerable library: freeCodeCamp/tools/contributor/dashboard-app/client/node_modules/sockjs/examples/multiplex/index.html,freeCodeCamp/tools/contributor/dashboard-app/client/node_modules/sockjs/examples/echo/index.html,freeCodeCamp/tools/contributor/dashboard-app/client/node_modules/sockjs/examples/express-3.x/index.html,freeCodeCamp/tools/contributor/dashboard-app/client/node_modules/sockjs/examples/hapi/html/index.html
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
Found in HEAD commit: 311e89b65c48c9468ebef29fa6a623b9e24a3093
Found in base branch: master
Vulnerability Details
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
Publish Date: 2018-01-18
URL: CVE-2012-6708
CVSS 3 Score Details (6.1 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://nvd.nist.gov/vuln/detail/CVE-2012-6708
Release Date: 2018-01-18
Fix Resolution: jQuery - v1.9.0
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries jquery min js jquery min js jquery min js jquery min js javascript library for dom operations library home page a href path to dependency file freecodecamp api server node modules jmespath index html path to vulnerable library freecodecamp api server node modules jmespath index html dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file freecodecamp tools contributor lib node modules underscore string test test underscore temp tests html path to vulnerable library freecodecamp tools contributor lib node modules underscore string test test underscore vendor jquery js dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file freecodecamp tools contributor dashboard app client node modules sockjs examples multiplex index html path to vulnerable library freecodecamp tools contributor dashboard app client node modules sockjs examples multiplex index html freecodecamp tools contributor dashboard app client node modules sockjs examples echo index html freecodecamp tools contributor dashboard app client node modules sockjs examples express x index html freecodecamp tools contributor dashboard app client node modules sockjs examples hapi html index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details jquery before is vulnerable to cross site scripting xss attacks the jquery strinput function does not differentiate selectors from html in a reliable fashion in vulnerable versions jquery determined whether the input was html by looking for the character anywhere in the string giving attackers more flexibility when attempting to construct a malicious payload in fixed versions jquery only deems the input to be html if it explicitly starts with the character limiting exploitability only to attackers who can control the beginning of a string which is far less common publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource ,0
161864,25412225406.0,IssuesEvent,2022-11-22 20:04:23,MetaMask/metamask-extension,https://api.github.com/repos/MetaMask/metamask-extension,opened,🧹 [UI House Keeping] `TextField`,area-UI design-system IA/NAV,"### Description
Ensure that `TextField` adheres to all of the following conventions and standards
- [ ] Has a `className` prop and the PropType descriptions are all the same
- [ ] Prop table in MDX docs have the ""Accepts all Box component props"" description and link
- [ ] We are consistent when using the same prop names like `size` and are suggesting the use of the generalized `design-system.js` constants e.g. `SIZES` as the primary option but noting the component consts in the documentation and using them for propType validation and storybook controls only
- [ ] Standardize all similar prop names for images `imgSrc`, `imgAlt`(html element + attribute) (needs audit)
- [ ] We have a story for each component prop and we use the prop name verbatim e.g. `size` prop would be `export const Size = (args) => (`
- [ ] We have the accompanying documentation for each component prop and we use the prop name verbatim e.g. `size` prop would be `### Size`
- [ ] Are multiple props stories allowed? e.g. `Color, Background Color And Border Color` story in `base-avatar` - [ ] yes when it makes sense to
- [ ] All Base components follow the suffix convention e.g. `ButtonBase`
- [ ] All Base component MDX documentation have the base component notification at the top
- [ ] Add `mm-` prefix to all classNames
- [ ] className is kebab case version of the component name
- [ ] Spread base components props and reduce duplication of props when props aren't being changed and remain the same for both variant and base components
- [ ] Add component to root `index.js` file in component-library
- [ ] Add locals for any default text I18nContext as default context
- [ ] Add any ""to dos"" with a `// TODO:` comment so we can search for them at a later date e.g. blocking components etc
- [ ] Add snapshot testing
- [ ] Add pixel values to propType descriptions if we use abstracted prop types that relate to pixel values e.g. `SIZE.MD (32px)`
- [ ] Each prop section in the MDX docs should have: a heading, a description, a story and an example code snippet
Ensure that `TextField` adheres to all of the following conventions and standards
- [ ] Has a `className` prop and the PropType descriptions are all the same
- [ ] Prop table in MDX docs have the ""Accepts all Box component props"" description and link
- [ ] We are consistent when using the same prop names like `size` and are suggesting the use of the generalized `design-system.js` constants e.g. `SIZES` as the primary option but noting the component consts in the documentation and using them for propType validation and storybook controls only
- [ ] Standardize all similar prop names for images `imgSrc`, `imgAlt`(html element + attribute) (needs audit)
- [ ] We have a story for each component prop and we use the prop name verbatim e.g. `size` prop would be `export const Size = (args) => (`
- [ ] We have the accompanying documentation for each component prop and we use the prop name verbatim e.g. `size` prop would be `### Size`
- [ ] Are multiple props stories allowed? e.g. `Color, Background Color And Border Color` story in `base-avatar` - [ ] yes when it makes sense to
- [ ] All Base components follow the suffix convention e.g. `ButtonBase`
- [ ] All Base component MDX documentation have the base component notification at the top
- [ ] Add `mm-` prefix to all classNames
- [ ] className is kebab case version of the component name
- [ ] Spread base components props and reduce duplication of props when props aren't being changed and remain the same for both variant and base components
- [ ] Add component to root `index.js` file in component-library
- [ ] Add locals for any default text I18nContext as default context
- [ ] Add any ""to dos"" with a `// TODO:` comment so we can search for them at a later date e.g. blocking components etc
- [ ] Add snapshot testing
- [ ] Add pixel values to propType descriptions if we use abstracted prop types that relate to pixel values e.g. `SIZE.MD (32px)`
- [ ] Each prop section in the MDX docs should have: a heading, a description, a story and an example code snippet
18061,24068081225.0,IssuesEvent,2022-09-17 19:41:39,lynnandtonic/nestflix.fun,https://api.github.com/repos/lynnandtonic/nestflix.fun,closed,"Add The Corpse Danced at Midnight from ""Murder, She Wrote"" (Screenshots and Poster Added)",suggested title in process,"Please add as much of the following info as you can:
Title: ""The Corpse Danced at Midnight""
Type (film/tv show): film: 80's thriller
Film or show in which it appears: Murder, She Wrote
Is the parent film/show streaming anywhere? Yes - Peacock
About when in the parent film/show does it appear? Ep. 1x05 - ""Hooray for Homicide""
Actual footage of the film/show can be seen (yes/no)? Yes
Timestamp: 27:20 - 27:57
Synopsis: Based on the 1984 bestselling murder-mystery by J. B. Fletcher
Producer: Ross Haley (originally Jerry Lydecker with Lydecker Productions)
Director: Ross Haley
Writer: Allan Gebhart
Cast: Eve Crystal, Scott Bennett
""Fun"" Fact : The ""gore and sex-filled"" film was never finished due to the murder of original producer, Jerry Lydecker, by the film's leading lady, Eve Crystal, and the subsequent cover-up by director, Ross Haley.
",1.0,"Add The Corpse Danced at Midnight from ""Murder, She Wrote"" (Screenshots and Poster Added) - Please add as much of the following info as you can:
Title: ""The Corpse Danced at Midnight""
Type (film/tv show): film: 80's thriller
Film or show in which it appears: Murder, She Wrote
Is the parent film/show streaming anywhere? Yes - Peacock
About when in the parent film/show does it appear? Ep. 1x05 - ""Hooray for Homicide""
Actual footage of the film/show can be seen (yes/no)? Yes
Timestamp: 27:20 - 27:57
Synopsis: Based on the 1984 bestselling murder-mystery by J. B. Fletcher
Producer: Ross Haley (originally Jerry Lydecker with Lydecker Productions)
Director: Ross Haley
Writer: Allan Gebhart
Cast: Eve Crystal, Scott Bennett
""Fun"" Fact : The ""gore and sex-filled"" film was never finished due to the murder of original producer, Jerry Lydecker, by the film's leading lady, Eve Crystal, and the subsequent cover-up by director, Ross Haley.
",0,add the corpse danced at midnight from murder she wrote screenshots and poster added please add as much of the following info as you can title the corpse danced at midnight type film tv show film s thriller film or show in which it appears murder she wrote is the parent film show streaming anywhere yes peacock about when in the parent film show does it appear ep hooray for homicide actual footage of the film show can be seen yes no yes timestamp synopsis based on the bestselling murder mystery by j b fletcher producer ross haley originally jerry lydecker with lydecker productions director ross haley writer allan gebhart cast eve crystal scott bennett fun fact the gore and sex filled film was never finished due to the murder of original producer jerry lydecker by the film s leading lady eve crystal and the subsequent cover up by director ross haley ,0
703,9546480058.0,IssuesEvent,2019-05-01 20:04:16,Azure/azure-functions-host,https://api.github.com/repos/Azure/azure-functions-host,closed,Log function metadata at host startup for better ScaleController correllation,P0 Supportability,"Right now the ScaleController logs trigger details in JSON whenever it syncs a trigger. It's difficult to correlate that to what the host sees because we don't explicitly write any of these details in the host logs. If we wrote a very similar JSON string out for each trigger we have at host startup (and perhaps all function metadata), it would make it much easier to write a detector and alert customers when they need to run ""sync triggers"". ",True,"Log function metadata at host startup for better ScaleController correllation - Right now the ScaleController logs trigger details in JSON whenever it syncs a trigger. It's difficult to correlate that to what the host sees because we don't explicitly write any of these details in the host logs. If we wrote a very similar JSON string out for each trigger we have at host startup (and perhaps all function metadata), it would make it much easier to write a detector and alert customers when they need to run ""sync triggers"". ",1,log function metadata at host startup for better scalecontroller correllation right now the scalecontroller logs trigger details in json whenever it syncs a trigger it s difficult to correlate that to what the host sees because we don t explicitly write any of these details in the host logs if we wrote a very similar json string out for each trigger we have at host startup and perhaps all function metadata it would make it much easier to write a detector and alert customers when they need to run sync triggers ,1
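What this issue asks for is essentially one structured log line per trigger at host startup, mirroring the JSON the ScaleController already emits. A rough sketch of that shape follows; the field names and trigger values are invented for illustration and are not the ScaleController's actual schema.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("host.startup")

# Hypothetical trigger metadata -- field names are illustrative only.
triggers = [
    {"functionName": "ProcessOrder", "type": "queueTrigger", "queueName": "orders"},
    {"functionName": "HttpPing", "type": "httpTrigger", "route": "ping"},
]

for trigger in triggers:
    # One JSON line per trigger makes it easy to correlate with ScaleController logs.
    logger.info("Trigger metadata: %s", json.dumps(trigger, sort_keys=True))
```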
1545,22326206082.0,IssuesEvent,2022-06-14 10:51:15,bcpierce00/unison,https://api.github.com/repos/bcpierce00/unison,closed,make NATIVE=false fails,portability,"I have a machine where I was unable to get ocamlopt working, so I did this:
make UISTYLE=text NATIVE=false
The result was:
/usr/bin/ld: osxsupport.o: in function `getFileInfos':
/home/bcrowell/c/unison/src/osxsupport.c:83: undefined reference to `unix_error'
/usr/bin/ld: osxsupport.o: in function `setFileInfos':
/home/bcrowell/c/unison/src/osxsupport.c:134: undefined reference to `unix_error'
",True,"make NATIVE=false fails - I have a machine where I was unable to get ocamlopt working, so I did this:
make UISTYLE=text NATIVE=false
The result was:
/usr/bin/ld: osxsupport.o: in function `getFileInfos':
/home/bcrowell/c/unison/src/osxsupport.c:83: undefined reference to `unix_error'
/usr/bin/ld: osxsupport.o: in function `setFileInfos':
/home/bcrowell/c/unison/src/osxsupport.c:134: undefined reference to `unix_error'
",1,make native false fails i have a machine where i was unable to get ocamlopt working so i did this make uistyle text native false the result was usr bin ld osxsupport o in function getfileinfos home bcrowell c unison src osxsupport c undefined reference to unix error usr bin ld osxsupport o in function setfileinfos home bcrowell c unison src osxsupport c undefined reference to unix error ,1
98323,29735429824.0,IssuesEvent,2023-06-14 00:10:27,jqlang/jq,https://api.github.com/repos/jqlang/jq,closed,Make fails on Ubuntu 18.04,build,"`./.libs/libjq.a(builtin.o): In function `f_pow10':
/home/daadmin/StdUtils/jq/src/libm.h:253: undefined reference to `pow10'
collect2: error: ld returned 1 exit status
Makefile:984: recipe for target 'jq' failed
make[2]: *** [jq] Error 1
make[2]: Leaving directory '/home/daadmin/StdUtils/jq'
Makefile:1148: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/home/daadmin/StdUtils/jq'
Makefile:825: recipe for target 'all' failed
make: *** [all] Error 2
`
Is this the same issue as #1565 ??",1.0,"Make fails on Ubuntu 18.04 - `./.libs/libjq.a(builtin.o): In function `f_pow10':
/home/daadmin/StdUtils/jq/src/libm.h:253: undefined reference to `pow10'
collect2: error: ld returned 1 exit status
Makefile:984: recipe for target 'jq' failed
make[2]: *** [jq] Error 1
make[2]: Leaving directory '/home/daadmin/StdUtils/jq'
Makefile:1148: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/home/daadmin/StdUtils/jq'
Makefile:825: recipe for target 'all' failed
make: *** [all] Error 2
`
Is this the same issue as #1565 ??",0,make fails on ubuntu libs libjq a builtin o in function f home daadmin stdutils jq src libm h undefined reference to error ld returned exit status makefile recipe for target jq failed make error make leaving directory home daadmin stdutils jq makefile recipe for target all recursive failed make error make leaving directory home daadmin stdutils jq makefile recipe for target all failed make error is this the same issue as ,0
1433,21640795185.0,IssuesEvent,2022-05-05 18:33:43,damccorm/test-migration-target,https://api.github.com/repos/damccorm/test-migration-target,opened,Improve the side input materialization for the DirectRunner/ULR from iterable to storing the multimap directly,P3 improvement runner-direct portability,"https://github.com/apache/beam/pull/4011 migrated to using a multimap as the materialization format for side inputs.
The migration used a trivial multimap \-\> iterable \-\> multimap conversion within the DirectRunner for first pass implementation purposes. Note that this is no different then the current materialization from a performance perspective it just moves this logic within the purview of the runner.
Imported from Jira [BEAM-3080](https://issues.apache.org/jira/browse/BEAM-3080). Original Jira may contain additional context.
Reported by: lcwik.",True,"Improve the side input materialization for the DirectRunner/ULR from iterable to storing the multimap directly - https://github.com/apache/beam/pull/4011 migrated to using a multimap as the materialization format for side inputs.
The migration used a trivial multimap \-\> iterable \-\> multimap conversion within the DirectRunner for first pass implementation purposes. Note that this is no different then the current materialization from a performance perspective it just moves this logic within the purview of the runner.
Imported from Jira [BEAM-3080](https://issues.apache.org/jira/browse/BEAM-3080). Original Jira may contain additional context.
Reported by: lcwik.",1,improve the side input materialization for the directrunner ulr from iterable to storing the multimap directly migrated to using a multimap as the materialization format for side inputs the migration used a trivial multimap iterable multimap conversion within the directrunner for first pass implementation purposes note that this is no different then the current materialization from a performance perspective it just moves this logic within the purview of the runner imported from jira original jira may contain additional context reported by lcwik ,1
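To make the "trivial multimap -> iterable -> multimap conversion" concrete, here is a small Python sketch of the round trip, with a plain dict-of-lists standing in for the runner's multimap; none of this is actual Beam or DirectRunner code.

```python
# Plain-Python stand-in for the multimap -> iterable -> multimap round trip.
# This only illustrates why the conversion is a correctness no-op; it is not
# Beam/DirectRunner code.
from collections import defaultdict

def multimap_to_iterable(multimap):
    return [(k, v) for k, values in multimap.items() for v in values]

def iterable_to_multimap(pairs):
    out = defaultdict(list)
    for k, v in pairs:
        out[k].append(v)
    return dict(out)

side_input = {"user": ["a", "b"], "admin": ["c"]}
assert iterable_to_multimap(multimap_to_iterable(side_input)) == side_input
```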
1652,23796593592.0,IssuesEvent,2022-09-02 20:32:30,golang/vulndb,https://api.github.com/repos/golang/vulndb,closed,x/vulndb: potential Go vuln in github.com/zitadel/zitadel: CVE-2022-36051,excluded: NOT_IMPORTABLE,"CVE-2022-36051 references [github.com/zitadel/zitadel](https://github.com/zitadel/zitadel), which may be a Go module.
Description:
ZITADEL combines the ease of Auth0 and the versatility of Keycloak.**Actions**, introduced in ZITADEL **1.42.0** on the API and **1.56.0** for Console, is a feature, where users with role.`ORG_OWNER` are able to create Javascript Code, which is invoked by the system at certain points during the login. **Actions**, for example, allow creating authorizations (user grants) on newly created users programmatically. Due to a missing authorization check, **Actions** were able to grant authorizations for projects that belong to other organizations inside the same Instance. Granting authorizations via API and Console is not affected by this vulnerability. There is currently no known workaround, users should update.
References:
- NIST: https://nvd.nist.gov/vuln/detail/CVE-2022-36051
- JSON: https://github.com/CVEProject/cvelist/tree/33138126b6cf9be5834bbcd5b2c6a82d76e8c905/2022/36xxx/CVE-2022-36051.json
- web: https://github.com/zitadel/zitadel/security/advisories/GHSA-c8fj-4pm8-mp2c
- web: https://github.com/zitadel/zitadel/releases/tag/v1.87.1
- web: https://github.com/zitadel/zitadel/releases/tag/v2.2.0
- Imported by: https://pkg.go.dev/github.com/zitadel/zitadel?tab=importedby
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/zitadel/zitadel
packages:
- package: zitadel
description: |+
ZITADEL combines the ease of Auth0 and the versatility of Keycloak.**Actions**, introduced in ZITADEL **1.42.0** on the API and **1.56.0** for Console, is a feature, where users with role.`ORG_OWNER` are able to create Javascript Code, which is invoked by the system at certain points during the login. **Actions**, for example, allow creating authorizations (user grants) on newly created users programmatically. Due to a missing authorization check, **Actions** were able to grant authorizations for projects that belong to other organizations inside the same Instance. Granting authorizations via API and Console is not affected by this vulnerability. There is currently no known workaround, users should update.
cves:
- CVE-2022-36051
references:
- web: https://github.com/zitadel/zitadel/security/advisories/GHSA-c8fj-4pm8-mp2c
- web: https://github.com/zitadel/zitadel/releases/tag/v1.87.1
- web: https://github.com/zitadel/zitadel/releases/tag/v2.2.0
```",True,"x/vulndb: potential Go vuln in github.com/zitadel/zitadel: CVE-2022-36051 - CVE-2022-36051 references [github.com/zitadel/zitadel](https://github.com/zitadel/zitadel), which may be a Go module.
Description:
ZITADEL combines the ease of Auth0 and the versatility of Keycloak.**Actions**, introduced in ZITADEL **1.42.0** on the API and **1.56.0** for Console, is a feature, where users with role.`ORG_OWNER` are able to create Javascript Code, which is invoked by the system at certain points during the login. **Actions**, for example, allow creating authorizations (user grants) on newly created users programmatically. Due to a missing authorization check, **Actions** were able to grant authorizations for projects that belong to other organizations inside the same Instance. Granting authorizations via API and Console is not affected by this vulnerability. There is currently no known workaround, users should update.
References:
- NIST: https://nvd.nist.gov/vuln/detail/CVE-2022-36051
- JSON: https://github.com/CVEProject/cvelist/tree/33138126b6cf9be5834bbcd5b2c6a82d76e8c905/2022/36xxx/CVE-2022-36051.json
- web: https://github.com/zitadel/zitadel/security/advisories/GHSA-c8fj-4pm8-mp2c
- web: https://github.com/zitadel/zitadel/releases/tag/v1.87.1
- web: https://github.com/zitadel/zitadel/releases/tag/v2.2.0
- Imported by: https://pkg.go.dev/github.com/zitadel/zitadel?tab=importedby
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/zitadel/zitadel
packages:
- package: zitadel
description: |+
ZITADEL combines the ease of Auth0 and the versatility of Keycloak.**Actions**, introduced in ZITADEL **1.42.0** on the API and **1.56.0** for Console, is a feature, where users with role.`ORG_OWNER` are able to create Javascript Code, which is invoked by the system at certain points during the login. **Actions**, for example, allow creating authorizations (user grants) on newly created users programmatically. Due to a missing authorization check, **Actions** were able to grant authorizations for projects that belong to other organizations inside the same Instance. Granting authorizations via API and Console is not affected by this vulnerability. There is currently no known workaround, users should update.
cves:
- CVE-2022-36051
references:
- web: https://github.com/zitadel/zitadel/security/advisories/GHSA-c8fj-4pm8-mp2c
- web: https://github.com/zitadel/zitadel/releases/tag/v1.87.1
- web: https://github.com/zitadel/zitadel/releases/tag/v2.2.0
```",1,x vulndb potential go vuln in github com zitadel zitadel cve cve references which may be a go module description zitadel combines the ease of and the versatility of keycloak actions introduced in zitadel on the api and for console is a feature where users with role org owner are able to create javascript code which is invoked by the system at certain points during the login actions for example allow creating authorizations user grants on newly created users programmatically due to a missing authorization check actions were able to grant authorizations for projects that belong to other organizations inside the same instance granting authorizations via api and console is not affected by this vulnerability there is currently no known workaround users should update references nist json web web web imported by see for instructions on how to triage this report modules module github com zitadel zitadel packages package zitadel description zitadel combines the ease of and the versatility of keycloak actions introduced in zitadel on the api and for console is a feature where users with role org owner are able to create javascript code which is invoked by the system at certain points during the login actions for example allow creating authorizations user grants on newly created users programmatically due to a missing authorization check actions were able to grant authorizations for projects that belong to other organizations inside the same instance granting authorizations via api and console is not affected by this vulnerability there is currently no known workaround users should update cves cve references web web web ,1
1863,27585460403.0,IssuesEvent,2023-03-08 19:23:12,golang/vulndb,https://api.github.com/repos/golang/vulndb,closed,x/vulndb: potential Go vuln in github.com/answerdev/answer: GHSA-vxhr-p2vp-7gf8,excluded: NOT_IMPORTABLE,"In GitHub Security Advisory [GHSA-vxhr-p2vp-7gf8](https://github.com/advisories/GHSA-vxhr-p2vp-7gf8), there is a vulnerability in the following Go packages or modules:
| Unit | Fixed | Vulnerable Ranges |
| - | - | - |
| [github.com/answerdev/answer](https://pkg.go.dev/github.com/answerdev/answer) | 1.0.6 | < 1.0.6 |
Cross references:
- Module github.com/answerdev/answer appears in issue #1541 EFFECTIVELY_PRIVATE
- Module github.com/answerdev/answer appears in issue #1550 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1551 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1552 EFFECTIVELY_PRIVATE
- Module github.com/answerdev/answer appears in issue #1553 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1554 EFFECTIVELY_PRIVATE
- Module github.com/answerdev/answer appears in issue #1592 NOT_IMPORTABLE
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/answerdev/answer
versions:
- fixed: 1.0.6
packages:
- package: github.com/answerdev/answer
description: Cross-site Scripting (XSS) - Reflected in GitHub repository answerdev/answer
prior to 1.0.6.
cves:
- CVE-2023-1239
ghsas:
- GHSA-vxhr-p2vp-7gf8
references:
- web: https://nvd.nist.gov/vuln/detail/CVE-2023-1239
- fix: https://github.com/answerdev/answer/commit/9870ed87fb24ed468aaf1e169c2d028e0f375106
- web: https://huntr.dev/bounties/3a22c609-d2d8-4613-815d-58f5990b8bd8
- advisory: https://github.com/advisories/GHSA-vxhr-p2vp-7gf8
```",True,"x/vulndb: potential Go vuln in github.com/answerdev/answer: GHSA-vxhr-p2vp-7gf8 - In GitHub Security Advisory [GHSA-vxhr-p2vp-7gf8](https://github.com/advisories/GHSA-vxhr-p2vp-7gf8), there is a vulnerability in the following Go packages or modules:
| Unit | Fixed | Vulnerable Ranges |
| - | - | - |
| [github.com/answerdev/answer](https://pkg.go.dev/github.com/answerdev/answer) | 1.0.6 | < 1.0.6 |
Cross references:
- Module github.com/answerdev/answer appears in issue #1541 EFFECTIVELY_PRIVATE
- Module github.com/answerdev/answer appears in issue #1550 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1551 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1552 EFFECTIVELY_PRIVATE
- Module github.com/answerdev/answer appears in issue #1553 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1554 EFFECTIVELY_PRIVATE
- Module github.com/answerdev/answer appears in issue #1592 NOT_IMPORTABLE
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/answerdev/answer
versions:
- fixed: 1.0.6
packages:
- package: github.com/answerdev/answer
description: Cross-site Scripting (XSS) - Reflected in GitHub repository answerdev/answer
prior to 1.0.6.
cves:
- CVE-2023-1239
ghsas:
- GHSA-vxhr-p2vp-7gf8
references:
- web: https://nvd.nist.gov/vuln/detail/CVE-2023-1239
- fix: https://github.com/answerdev/answer/commit/9870ed87fb24ed468aaf1e169c2d028e0f375106
- web: https://huntr.dev/bounties/3a22c609-d2d8-4613-815d-58f5990b8bd8
- advisory: https://github.com/advisories/GHSA-vxhr-p2vp-7gf8
```",1,x vulndb potential go vuln in github com answerdev answer ghsa vxhr in github security advisory there is a vulnerability in the following go packages or modules unit fixed vulnerable ranges cross references module github com answerdev answer appears in issue effectively private module github com answerdev answer appears in issue not importable module github com answerdev answer appears in issue not importable module github com answerdev answer appears in issue effectively private module github com answerdev answer appears in issue not importable module github com answerdev answer appears in issue effectively private module github com answerdev answer appears in issue not importable see for instructions on how to triage this report modules module github com answerdev answer versions fixed packages package github com answerdev answer description cross site scripting xss reflected in github repository answerdev answer prior to cves cve ghsas ghsa vxhr references web fix web advisory ,1
1185,15357056283.0,IssuesEvent,2021-03-01 13:13:52,AzureAD/microsoft-authentication-library-for-dotnet,https://api.github.com/repos/AzureAD/microsoft-authentication-library-for-dotnet,closed,[Bug] HoloLens 2 device code flow does not work,P2 Supportability Unity bug workaround exists,"**Logs and Network traces**
`Error setting value to 'TenantDiscoveryEndpoint' on 'Microsoft.Identity.Client.Instance.Discovery.InstanceDiscoveryResponse'. at Microsoft.Identity.Json.Serialization.ExpressionValueProvider.SetValue (System.Object target, System.Object value) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue (Microsoft.Identity.Json.Serialization.JsonProperty property, Microsoft.Identity.Json.JsonConverter propertyConverter, Microsoft.Identity.Json.Serialization.JsonContainerContract containerContract, Microsoft.Identity.Json.Serialization.JsonProperty containerProperty, Microsoft.Identity.Json.JsonReader reader, System.Object target) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.Serialization.JsonSerializerInternalReader.PopulateObject (System.Object newObject, Microsoft.Identity.Json.JsonReader reader, Microsoft.Identity.Json.Serialization.JsonObjectContract contract, Microsoft.Identity.Json.Serialization.JsonProperty member, System.String id) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.Serialization.JsonSerializerInternalReader.CreateObject (Microsoft.Identity.Json.JsonReader reader, System.Type objectType, Microsoft.Identity.Json.Serialization.JsonContract contract, Microsoft.Identity.Json.Serialization.JsonProperty member, Microsoft.Identity.Json.Serialization.JsonContainerContract containerContract, Microsoft.Identity.Json.Serialization.JsonProperty containerMember, System.Object existingValue) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.Serialization.JsonSerializerInternalReader.CreateValueInternal (Microsoft.Identity.Json.JsonReader reader, System.Type objectType, Microsoft.Identity.Json.Serialization.JsonContract contract, Microsoft.Identity.Json.Serialization.JsonProperty member, Microsoft.Identity.Json.Serialization.JsonContainerContract containerContract, Microsoft.Identity.Json.Serialization.JsonProperty containerMember, System.Object existingValue) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.Serialization.JsonSerializerInternalReader.Deserialize (Microsoft.Identity.Json.JsonReader reader, System.Type objectType, System.Boolean checkAdditionalContent) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.JsonSerializer.DeserializeInternal (Microsoft.Identity.Json.JsonReader reader, System.Type objectType) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.JsonConvert.DeserializeObject (System.String value, System.Type type, Microsoft.Identity.Json.JsonSerializerSettings settings) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.JsonConvert.DeserializeObject[T] (System.String value, Microsoft.Identity.Json.JsonSerializerSettings settings) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.JsonConvert.DeserializeObject[T] (System.String value) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Client.Utils.JsonHelper.DeserializeFromJson[T] (System.String json) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Client.OAuth2.OAuth2Client.CreateResponse[T] (Microsoft.Identity.Client.Http.HttpResponse response, Microsoft.Identity.Client.Internal.RequestContext requestContext) [0x00000] in <00000000000000000000000000000000>:0 \r\n at 
Microsoft.Identity.Json.Linq.Extensions+d__14`2[T,U].<>m__Finally1 () [0x00000] in <00000000000000000000000000000000>:0 \r\n at System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.InvokeMoveNext (System.Object stateMachine) [0x00000]`
**Which Version of MSAL are you using ?**
4.22, built from the current MSAL master branch.
**Platform**
Unity 2019.4.0f1, UWP, IL2CPP, ARM for HoloLens 2
**What authentication flow has the issue?**
* Desktop / Mobile
* [ ] Interactive
* [ ] Integrated Windows Auth
* [ ] Username Password
* [X] Device code flow (browserless)
* Web App
* [ ] Authorization code
* [ ] OBO
* Daemon App
* [ ] Service to Service calls
Other? - please describe;
**Is this a new or existing app?**
We were using some version of MSAL v3, but the application in production started having issues with authenticating private accounts. After a long support session we learned that we should change our flow a little, which required an update to MSAL v4.
Unfortunately, while everything works fine in the editor, the version deployed on the HoloLens does not.
**Repro**
I am attaching the whole script that contains our logic for device code.
The controller script only calls the SignInWithDeviceFlow() method, and this method fails on AcquireToken.
[DeviceCodeAuthenticator.txt](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/files/5582452/DeviceCodeAuthenticator.txt)
**Expected behavior**
Using AcquireTokenWithDeviceCode should give code to authenticate user.
**Actual behavior**
Exception is being thrown and no code is given.
**Possible Solution**
This issue appeared in the MSAL v3 we were using previously, and link.xml fixed it. In the new version of MSAL, this fix does not work.
I have turned panic mode on and basically put all possible options in link.xml; it still does not work. Here is the content:
```
```
**Additional context/ Logs / Screenshots**
I have tried to cheat Unity bytecode stripping by pasting the library into the build directory and then deploying it to the HoloLens, but it changes nothing.
I have tried using several versions of MSAL:
- 4.7.1
- 4.22 built from master branch
- 4.22 downloaded from NuGet and then copied over to Unity (only ARM version)
",True,"[Bug] HoloLens 2 device code flow does not work - **Logs and Network traces**
`Error setting value to 'TenantDiscoveryEndpoint' on 'Microsoft.Identity.Client.Instance.Discovery.InstanceDiscoveryResponse'. at Microsoft.Identity.Json.Serialization.ExpressionValueProvider.SetValue (System.Object target, System.Object value) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.Serialization.JsonSerializerInternalReader.SetPropertyValue (Microsoft.Identity.Json.Serialization.JsonProperty property, Microsoft.Identity.Json.JsonConverter propertyConverter, Microsoft.Identity.Json.Serialization.JsonContainerContract containerContract, Microsoft.Identity.Json.Serialization.JsonProperty containerProperty, Microsoft.Identity.Json.JsonReader reader, System.Object target) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.Serialization.JsonSerializerInternalReader.PopulateObject (System.Object newObject, Microsoft.Identity.Json.JsonReader reader, Microsoft.Identity.Json.Serialization.JsonObjectContract contract, Microsoft.Identity.Json.Serialization.JsonProperty member, System.String id) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.Serialization.JsonSerializerInternalReader.CreateObject (Microsoft.Identity.Json.JsonReader reader, System.Type objectType, Microsoft.Identity.Json.Serialization.JsonContract contract, Microsoft.Identity.Json.Serialization.JsonProperty member, Microsoft.Identity.Json.Serialization.JsonContainerContract containerContract, Microsoft.Identity.Json.Serialization.JsonProperty containerMember, System.Object existingValue) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.Serialization.JsonSerializerInternalReader.CreateValueInternal (Microsoft.Identity.Json.JsonReader reader, System.Type objectType, Microsoft.Identity.Json.Serialization.JsonContract contract, Microsoft.Identity.Json.Serialization.JsonProperty member, Microsoft.Identity.Json.Serialization.JsonContainerContract containerContract, Microsoft.Identity.Json.Serialization.JsonProperty containerMember, System.Object existingValue) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.Serialization.JsonSerializerInternalReader.Deserialize (Microsoft.Identity.Json.JsonReader reader, System.Type objectType, System.Boolean checkAdditionalContent) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.JsonSerializer.DeserializeInternal (Microsoft.Identity.Json.JsonReader reader, System.Type objectType) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.JsonConvert.DeserializeObject (System.String value, System.Type type, Microsoft.Identity.Json.JsonSerializerSettings settings) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.JsonConvert.DeserializeObject[T] (System.String value, Microsoft.Identity.Json.JsonSerializerSettings settings) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Json.JsonConvert.DeserializeObject[T] (System.String value) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Client.Utils.JsonHelper.DeserializeFromJson[T] (System.String json) [0x00000] in <00000000000000000000000000000000>:0 \r\n at Microsoft.Identity.Client.OAuth2.OAuth2Client.CreateResponse[T] (Microsoft.Identity.Client.Http.HttpResponse response, Microsoft.Identity.Client.Internal.RequestContext requestContext) [0x00000] in <00000000000000000000000000000000>:0 \r\n at 
Microsoft.Identity.Json.Linq.Extensions+d__14`2[T,U].<>m__Finally1 () [0x00000] in <00000000000000000000000000000000>:0 \r\n at System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.InvokeMoveNext (System.Object stateMachine) [0x00000]`
**Which Version of MSAL are you using ?**
4.22, built from the current MSAL master branch.
**Platform**
Unity 2019.4.0f1, UWP, IL2CPP, ARM for HoloLens 2
**What authentication flow has the issue?**
* Desktop / Mobile
* [ ] Interactive
* [ ] Integrated Windows Auth
* [ ] Username Password
* [X] Device code flow (browserless)
* Web App
* [ ] Authorization code
* [ ] OBO
* Daemon App
* [ ] Service to Service calls
Other? - please describe;
**Is this a new or existing app?**
We were using some version of MSAL v3, but application in production started having issues with authenticating private accounts. After long support session we learned that we should change a little our flow, this required update to MSAL v4.
Unfortunately, while everything works fine in the editor, version deployed on the HoloLens does not.
**Repro**
I am attaching whole script that contains our logic for device code.
Controller script only calls SignInWithDeviceFlow() method and this method fails on AcquireToken.
[DeviceCodeAuthenticator.txt](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/files/5582452/DeviceCodeAuthenticator.txt)
**Expected behavior**
Using AcquireTokenWithDeviceCode should give code to authenticate user.
**Actual behavior**
Exception is being thrown and no code is given.
**Possible Solution**
This issue appeared in MSAL v3 we were using previously and link.xml fixed it. In the new version of MSAL, this fix does not work.
I have turned panic mode on and basically put all possible options in link.xml, still does not work. Here is the content:
```
```
**Additional context/ Logs / Screenshots**
I have tried to cheat Unity bytestripping, by pasting the library into the build directory and then deploying it to the HoloLens, but it changes nothing.
I have tried using several versions of MSAL:
- 4.7.1
- 4.22 built from master branch
- 4.22 downloaded from NuGet and then copied over to Unity (only ARM version)
",1, hololens device code flow does not work logs and network traces error setting value to tenantdiscoveryendpoint on microsoft identity client instance discovery instancediscoveryresponse at microsoft identity json serialization expressionvalueprovider setvalue system object target system object value in r n at microsoft identity json serialization jsonserializerinternalreader setpropertyvalue microsoft identity json serialization jsonproperty property microsoft identity json jsonconverter propertyconverter microsoft identity json serialization jsoncontainercontract containercontract microsoft identity json serialization jsonproperty containerproperty microsoft identity json jsonreader reader system object target in r n at microsoft identity json serialization jsonserializerinternalreader populateobject system object newobject microsoft identity json jsonreader reader microsoft identity json serialization jsonobjectcontract contract microsoft identity json serialization jsonproperty member system string id in r n at microsoft identity json serialization jsonserializerinternalreader createobject microsoft identity json jsonreader reader system type objecttype microsoft identity json serialization jsoncontract contract microsoft identity json serialization jsonproperty member microsoft identity json serialization jsoncontainercontract containercontract microsoft identity json serialization jsonproperty containermember system object existingvalue in r n at microsoft identity json serialization jsonserializerinternalreader createvalueinternal microsoft identity json jsonreader reader system type objecttype microsoft identity json serialization jsoncontract contract microsoft identity json serialization jsonproperty member microsoft identity json serialization jsoncontainercontract containercontract microsoft identity json serialization jsonproperty containermember system object existingvalue in r n at microsoft identity json serialization jsonserializerinternalreader deserialize microsoft identity json jsonreader reader system type objecttype system boolean checkadditionalcontent in r n at microsoft identity json jsonserializer deserializeinternal microsoft identity json jsonreader reader system type objecttype in r n at microsoft identity json jsonconvert deserializeobject system string value system type type microsoft identity json jsonserializersettings settings in r n at microsoft identity json jsonconvert deserializeobject system string value microsoft identity json jsonserializersettings settings in r n at microsoft identity json jsonconvert deserializeobject system string value in r n at microsoft identity client utils jsonhelper deserializefromjson system string json in r n at microsoft identity client createresponse microsoft identity client http httpresponse response microsoft identity client internal requestcontext requestcontext in r n at microsoft identity json linq extensions d m in r n at system runtime compilerservices asyncmethodbuildercore movenextrunner invokemovenext system object statemachine which version of msal are you using built from the current msal master branch platform unity uwp arm for hololens what authentication flow has the issue desktop mobile interactive integrated windows auth username password device code flow browserless web app authorization code obo daemon app service to service calls other please describe is this a new or existing app we were using some version of msal but application in production started having issues with authenticating private 
accounts after long support session we learned that we should change a little our flow this required update to msal unfortunately while everything works fine in the editor version deployed on the hololens does not repro i am attaching whole script that contains our logic for device code controller script only calls signinwithdeviceflow method and this method fails on acquiretoken expected behavior using acquiretokenwithdevicecode should give code to authenticate user actual behavior exception is being thrown and no code is given possible solution this issue appeared in msal we were using previously and link xml fixed it in the new version of msal this fix does not work i have turned panic mode on and basically put all possible options in link xml still does not work here is the content additional context logs screenshots i have tried to cheat unity bytestripping by pasting the library into the build directory and then deploying it to the hololens but it changes nothing i have tried using several versions of msal built from master branch downloaded from nuget and then copied over to unity only arm version ,1
32604,2756395907.0,IssuesEvent,2015-04-27 08:04:27,UnifiedViews/Core,https://api.github.com/repos/UnifiedViews/Core,closed,Unexpected error when backend is offline,priority: High severity: bug,"1. shut down backend
2. in frontend, create new pipeline
3. add some DPU, make the pipeline valid
4. click Save & Close & Debug
5. click Cancel
6. move some DPU around canvas.
7. unexpected error
```
11:06:57.925 [http-nio-0:0:0:0:0:0:0:1-8080-exec-10] ERROR c.c.m.x.o.f.AppEntry - Uncaught exception
java.lang.IllegalArgumentException: Node with supplied id was not found!
at cz.cuni.mff.xrg.odcs.commons.app.pipeline.graph.PipelineGraph.moveNode(PipelineGraph.java:316) ~[commons-app-1.6.0-SNAPSHOT.jar:na]
at cz.cuni.mff.xrg.odcs.frontend.gui.components.pipelinecanvas.PipelineCanvas.dpuMoved(PipelineCanvas.java:453) ~[PipelineCanvas.class:na]
at cz.cuni.mff.xrg.odcs.frontend.gui.components.pipelinecanvas.PipelineCanvas.access$400(PipelineCanvas.java:48) ~[PipelineCanvas.class:na]
at cz.cuni.mff.xrg.odcs.frontend.gui.components.pipelinecanvas.PipelineCanvas$1.onDpuMoved(PipelineCanvas.java:144) ~[PipelineCanvas$1.class:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_75]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_75]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_75]
at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_75]
at com.vaadin.server.ServerRpcManager.applyInvocation(ServerRpcManager.java:168) ~[vaadin-server-7.3.7.jar:7.3.7]
at com.vaadin.server.ServerRpcManager.applyInvocation(ServerRpcManager.java:118) ~[vaadin-server-7.3.7.jar:7.3.7]
at com.vaadin.server.communication.ServerRpcHandler.handleInvocations(ServerRpcHandler.java:287) [vaadin-server-7.3.7.jar:7.3.7]
at com.vaadin.server.communication.ServerRpcHandler.handleRpc(ServerRpcHandler.java:180) [vaadin-server-7.3.7.jar:7.3.7]
at com.vaadin.server.communication.UidlRequestHandler.synchronizedHandleRequest(UidlRequestHandler.java:93) [vaadin-server-7.3.7.jar:7.3.7]
at com.vaadin.server.SynchronizedRequestHandler.handleRequest(SynchronizedRequestHandler.java:41) [vaadin-server-7.3.7.jar:7.3.7]
at com.vaadin.server.VaadinService.handleRequest(VaadinService.java:1406) [vaadin-server-7.3.7.jar:7.3.7]
at com.vaadin.server.VaadinServlet.service(VaadinServlet.java:305) [vaadin-server-7.3.7.jar:7.3.7]
at cz.cuni.mff.xrg.odcs.frontend.ODCSApplicationServlet.service(ODCSApplicationServlet.java:86) [ODCSApplicationServlet.class:na]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:728) [servlet-api.jar:na]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305) [catalina.jar:7.0.50]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) [catalina.jar:7.0.50]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) [tomcat7-websocket.jar:7.0.50]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) [catalina.jar:7.0.50]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) [catalina.jar:7.0.50]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222) [catalina.jar:7.0.50]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123) [catalina.jar:7.0.50]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) [catalina.jar:7.0.50]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171) [catalina.jar:7.0.50]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100) [catalina.jar:7.0.50]
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953) [catalina.jar:7.0.50]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) [catalina.jar:7.0.50]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:409) [catalina.jar:7.0.50]
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1044) [tomcat-coyote.jar:7.0.50]
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607) [tomcat-coyote.jar:7.0.50]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1721) [tomcat-coyote.jar:7.0.50]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1679) [tomcat-coyote.jar:7.0.50]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_75]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_75]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
```",1.0,"Unexpected error when backend is offline - 1. shut down backend
2. in frontend, create new pipeline
3. add some DPU, make the pipeline valid
4. click Save & Close & Debug
5. click Cancel
6. move some DPU around canvas.
7. unexpected error
```
11:06:57.925 [http-nio-0:0:0:0:0:0:0:1-8080-exec-10] ERROR c.c.m.x.o.f.AppEntry - Uncaught exception
java.lang.IllegalArgumentException: Node with supplied id was not found!
at cz.cuni.mff.xrg.odcs.commons.app.pipeline.graph.PipelineGraph.moveNode(PipelineGraph.java:316) ~[commons-app-1.6.0-SNAPSHOT.jar:na]
at cz.cuni.mff.xrg.odcs.frontend.gui.components.pipelinecanvas.PipelineCanvas.dpuMoved(PipelineCanvas.java:453) ~[PipelineCanvas.class:na]
at cz.cuni.mff.xrg.odcs.frontend.gui.components.pipelinecanvas.PipelineCanvas.access$400(PipelineCanvas.java:48) ~[PipelineCanvas.class:na]
at cz.cuni.mff.xrg.odcs.frontend.gui.components.pipelinecanvas.PipelineCanvas$1.onDpuMoved(PipelineCanvas.java:144) ~[PipelineCanvas$1.class:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_75]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_75]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_75]
at java.lang.reflect.Method.invoke(Method.java:606) ~[na:1.7.0_75]
at com.vaadin.server.ServerRpcManager.applyInvocation(ServerRpcManager.java:168) ~[vaadin-server-7.3.7.jar:7.3.7]
at com.vaadin.server.ServerRpcManager.applyInvocation(ServerRpcManager.java:118) ~[vaadin-server-7.3.7.jar:7.3.7]
at com.vaadin.server.communication.ServerRpcHandler.handleInvocations(ServerRpcHandler.java:287) [vaadin-server-7.3.7.jar:7.3.7]
at com.vaadin.server.communication.ServerRpcHandler.handleRpc(ServerRpcHandler.java:180) [vaadin-server-7.3.7.jar:7.3.7]
at com.vaadin.server.communication.UidlRequestHandler.synchronizedHandleRequest(UidlRequestHandler.java:93) [vaadin-server-7.3.7.jar:7.3.7]
at com.vaadin.server.SynchronizedRequestHandler.handleRequest(SynchronizedRequestHandler.java:41) [vaadin-server-7.3.7.jar:7.3.7]
at com.vaadin.server.VaadinService.handleRequest(VaadinService.java:1406) [vaadin-server-7.3.7.jar:7.3.7]
at com.vaadin.server.VaadinServlet.service(VaadinServlet.java:305) [vaadin-server-7.3.7.jar:7.3.7]
at cz.cuni.mff.xrg.odcs.frontend.ODCSApplicationServlet.service(ODCSApplicationServlet.java:86) [ODCSApplicationServlet.class:na]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:728) [servlet-api.jar:na]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305) [catalina.jar:7.0.50]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) [catalina.jar:7.0.50]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) [tomcat7-websocket.jar:7.0.50]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) [catalina.jar:7.0.50]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) [catalina.jar:7.0.50]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222) [catalina.jar:7.0.50]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123) [catalina.jar:7.0.50]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) [catalina.jar:7.0.50]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171) [catalina.jar:7.0.50]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100) [catalina.jar:7.0.50]
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953) [catalina.jar:7.0.50]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) [catalina.jar:7.0.50]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:409) [catalina.jar:7.0.50]
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1044) [tomcat-coyote.jar:7.0.50]
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607) [tomcat-coyote.jar:7.0.50]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1721) [tomcat-coyote.jar:7.0.50]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1679) [tomcat-coyote.jar:7.0.50]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_75]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_75]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
```",0,unexpected error when backend is offline shut down backend in frontend create new pipeline add some dpu make the pipeline valid click save close debug click cancel move some dpu around canvas unexpected error error c c m x o f appentry uncaught exception java lang illegalargumentexception node with supplied id was not found at cz cuni mff xrg odcs commons app pipeline graph pipelinegraph movenode pipelinegraph java at cz cuni mff xrg odcs frontend gui components pipelinecanvas pipelinecanvas dpumoved pipelinecanvas java at cz cuni mff xrg odcs frontend gui components pipelinecanvas pipelinecanvas access pipelinecanvas java at cz cuni mff xrg odcs frontend gui components pipelinecanvas pipelinecanvas ondpumoved pipelinecanvas java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at com vaadin server serverrpcmanager applyinvocation serverrpcmanager java at com vaadin server serverrpcmanager applyinvocation serverrpcmanager java at com vaadin server communication serverrpchandler handleinvocations serverrpchandler java at com vaadin server communication serverrpchandler handlerpc serverrpchandler java at com vaadin server communication uidlrequesthandler synchronizedhandlerequest uidlrequesthandler java at com vaadin server synchronizedrequesthandler handlerequest synchronizedrequesthandler java at com vaadin server vaadinservice handlerequest vaadinservice java at com vaadin server vaadinservlet service vaadinservlet java at cz cuni mff xrg odcs frontend odcsapplicationservlet service odcsapplicationservlet java at javax servlet http httpservlet service httpservlet java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache tomcat websocket server wsfilter dofilter wsfilter java at org apache catalina core applicationfilterchain internaldofilter applicationfilterchain java at org apache catalina core applicationfilterchain dofilter applicationfilterchain java at org apache catalina core standardwrappervalve invoke standardwrappervalve java at org apache catalina core standardcontextvalve invoke standardcontextvalve java at org apache catalina authenticator authenticatorbase invoke authenticatorbase java at org apache catalina core standardhostvalve invoke standardhostvalve java at org apache catalina valves errorreportvalve invoke errorreportvalve java at org apache catalina valves accesslogvalve invoke accesslogvalve java at org apache catalina core standardenginevalve invoke standardenginevalve java at org apache catalina connector coyoteadapter service coyoteadapter java at org apache coyote process java at org apache coyote abstractprotocol abstractconnectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net nioendpoint socketprocessor run nioendpoint java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java ,0
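As a hypothetical illustration (not the actual UnifiedViews code), the uncaught IllegalArgumentException above could be avoided with a defensive lookup in the moveNode-style handler, so that a stale node id coming from the canvas is ignored instead of crashing the request; all names below are invented for the sketch.
```
import java.util.HashMap;
import java.util.Map;

// Hypothetical graph holder; the point is only the lookup-and-guard pattern.
class PipelineGraphSketch {
  private final Map<Integer, int[]> nodePositions = new HashMap<>(); // id -> {x, y}

  // Returns false instead of throwing when the node id is unknown, e.g. after
  // the canvas and the server-side graph have gone out of sync.
  boolean moveNode(int nodeId, int x, int y) {
    int[] position = nodePositions.get(nodeId);
    if (position == null) {
      return false; // stale id from the UI; ignore the move
    }
    position[0] = x;
    position[1] = y;
    return true;
  }
}
```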
112,3285870996.0,IssuesEvent,2015-10-28 22:32:43,gcotelli/RenoirSt,https://api.github.com/repos/gcotelli/RenoirSt,closed,Improve test portability,enhancement portability,Use the built-in function for tabs and line delimiters in the expandMacros* methods instead of OSPlatform current lineDelimiter to easy the migration to other Smalltalk implementations.,True,Improve test portability - Use the built-in function for tabs and line delimiters in the expandMacros* methods instead of OSPlatform current lineDelimiter to easy the migration to other Smalltalk implementations.,1,improve test portability use the built in function for tabs and line delimiters in the expandmacros methods instead of osplatform current linedelimiter to easy the migration to other smalltalk implementations ,1
1455,21690957932.0,IssuesEvent,2022-05-09 15:19:20,damccorm/test-migration-target,https://api.github.com/repos/damccorm/test-migration-target,opened,Improve step names in portable runners,P3 runner-spark improvement portability-spark,"Step names are currently inconsistent between runners. This is probably unavoidable due to fusion, but we should design a system that is more consistent between runners so that most metric queries against the Fn API runner can also be used with the portable Flink and Spark runners.
Spark Runner:
MetricKey(step=ref_AppliedPTransform_count1_17, metric=MetricName(namespace=ns, name=counter), labels={}): 2
MetricKey(step=ref_AppliedPTransform_count2_18, metric=MetricName(namespace=ns, name=counter), labels={}): 4
...
Fn API Runner:
MetricKey(step=count1, metric=MetricName(namespace=ns, name=counter), labels={}): 2,
MetricKey(step=count2, metric=MetricName(namespace=ns, name=counter), labels={}): 4
Imported from Jira [BEAM-9997](https://issues.apache.org/jira/browse/BEAM-9997). Original Jira may contain additional context.
Reported by: ibzib.",True,"Improve step names in portable runners - Step names are currently inconsistent between runners. This is probably unavoidable due to fusion, but we should design a system that is more consistent between runners so that most metric queries against the Fn API runner can also be used with the portable Flink and Spark runners.
Spark Runner:
MetricKey(step=ref_AppliedPTransform_count1_17, metric=MetricName(namespace=ns, name=counter), labels={}): 2
MetricKey(step=ref_AppliedPTransform_count2_18, metric=MetricName(namespace=ns, name=counter), labels={}): 4
...
Fn API Runner:
MetricKey(step=count1, metric=MetricName(namespace=ns, name=counter), labels={}): 2,
MetricKey(step=count2, metric=MetricName(namespace=ns, name=counter), labels={}): 4
Imported from Jira [BEAM-9997](https://issues.apache.org/jira/browse/BEAM-9997). Original Jira may contain additional context.
Reported by: ibzib.",1,improve step names in portable runners step names are currently inconsistent between runners this is probably unavoidable due to fusion but we should design a system that is more consistent between runners so that most metric queries against the fn api runner can also be used with the portable flink and spark runners spark runner metrickey step ref appliedptransform metric metricname namespace ns name counter labels metrickey step ref appliedptransform metric metricname namespace ns name counter labels fn api runner metrickey step metric metricname namespace ns name counter labels metrickey step metric metricname namespace ns name counter labels imported from jira original jira may contain additional context reported by ibzib ,1
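As a rough illustration of the mismatch (not Beam's actual logic), a helper that maps the generated Spark runner step names onto the plain names reported by the Fn API runner might look like the sketch below; the prefix and trailing index patterns are taken from the example output above, and the class name is hypothetical.
```
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: strip the "ref_AppliedPTransform_" prefix and the
// trailing numeric index, so "ref_AppliedPTransform_count1_17" becomes "count1".
class StepNameNormalizer {
  private static final Pattern GENERATED =
      Pattern.compile("^ref_AppliedPTransform_(.+)_\\d+$");

  static String normalize(String stepName) {
    Matcher m = GENERATED.matcher(stepName);
    return m.matches() ? m.group(1) : stepName;
  }
}
```
With this, normalize("ref_AppliedPTransform_count2_18") yields "count2", matching the key the Fn API runner reports for the same counter.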
3585,3203793234.0,IssuesEvent,2015-10-02 20:55:23,opensim-org/opensim-core,https://api.github.com/repos/opensim-org/opensim-core,closed,master branch build failing on Travis (and splitting up python wrapping),Build,"Both gcc and clang appear to be failing with internal compiler errors. That's quite a trick since they are different compilers! The build is succeeding on AppVeyor (Windows).
Unless anyone is working on this I will try to reproduce it on my Ubuntu VM.
/cc @aseth1 @aymanhab @chrisdembia @klshrinidhi ",1.0,"master branch build failing on Travis (and splitting up python wrapping) - Both gcc and clang appear to be failing with internal compiler errors. That's quite a trick since they are different compilers! The build is succeeding on AppVeyor (Windows).
Unless anyone is working on this I will try to reproduce it on my Ubuntu VM.
/cc @aseth1 @aymanhab @chrisdembia @klshrinidhi ",0,master branch build failing on travis and splitting up python wrapping both gcc and clang appear to be failing with internal compiler errors that s quite a trick since they are different compilers the build is succeeding on appveyor windows unless anyone is working on this i will try to reproduce it on my ubuntu vm cc aymanhab chrisdembia klshrinidhi ,0
492325,14200560382.0,IssuesEvent,2020-11-16 05:46:29,oppia/oppia-android,https://api.github.com/repos/oppia/oppia-android,closed,Move app module test targets to the non-flaky queue in CircleCI [Blocked: #973],Priority: Essential Status: Blocked mini-project,"Currently there are several app module tests that are failing in robolectric. Due to this we have created a flaky_queue and a non_flaky_queue. We would like to fix all the app tests in robolectric so that we can move the app module tests to the non-flaky queue.
We will do this folder by folder for the app module tests, until we can move the entire app module.
The following folders need to be migrated:
- [ ] administratorcontrols
- [ ] completedstorylist
- [x] faq
- [x] help
- [ ] home
- [x] mydownloads
- [ ] onboarding
- [ ] ongoingtopiclist
- [ ] options
- [x] parser
- [ ] player
- [ ] profile
- [x] profileprogress
- [x] recyclerview
- [ ] settings/profile
- [x] splash
- [x] story
- [ ] testing
- [ ] topic
- [x] utility
- [ ] walkthrough
Blocked by #973",1.0,"Move app module test targets to the non-flaky queue in CircleCI [Blocked: #973] - Currently there are several app module tests that are failing in robolectric. Due to this we have created a flaky_queue and a non_flaky_queue. We would like to fix all the app tests in robolectric so that we can move the app module tests to the non-flaky queue.
We will do this folder by folder for the app module tests, until we can move the entire app module.
The following folders need to be migrated:
- [ ] administratorcontrols
- [ ] completedstorylist
- [x] faq
- [x] help
- [ ] home
- [x] mydownloads
- [ ] onboarding
- [ ] ongoingtopiclist
- [ ] options
- [x] parser
- [ ] player
- [ ] profile
- [x] profileprogress
- [x] recyclerview
- [ ] settings/profile
- [x] splash
- [x] story
- [ ] testing
- [ ] topic
- [x] utility
- [ ] walkthrough
Blocked by #973",0,move app module test targets to the non flaky queue in circleci currently there are several app module tests that are failing in robolectric due to this we have created a flaky queue and a non flaky queue we would like to fix all the app tests in robolectric so that we can move the app module tests to the non flaky queue we will do this folder by folder for the app module tests until we can move the entire app module the following folders need to be migrated administratorcontrols completedstorylist faq help home mydownloads onboarding ongoingtopiclist options parser player profile profileprogress recyclerview settings profile splash story testing topic utility walkthrough blocked by ,0
325958,24067341431.0,IssuesEvent,2022-09-17 17:32:07,brotkrueml/schema,https://api.github.com/repos/brotkrueml/schema,opened,Compatibility with TYPO3 v12,documentation feature,"- [ ] The extension is marked as compatible with TYPO3 v12.
- [ ] The content object registration is adapted to work with v11 and v12: https://docs.typo3.org/c/typo3/cms-core/main/en-us/Changelog/12.0/Breaking-96659-RegistrationOfCObjectsViaTYPO3_CONF_VARS.html
- [ ] A feature entry to the changelog is added.
- [ ] The documentation is adjusted.",1.0,"Compatibility with TYPO3 v12 - - [ ] The extension is marked as compatible with TYPO3 v12.
- [ ] The content object registration is adapted to work with v11 and v12: https://docs.typo3.org/c/typo3/cms-core/main/en-us/Changelog/12.0/Breaking-96659-RegistrationOfCObjectsViaTYPO3_CONF_VARS.html
- [ ] A feature entry to the changelog is added.
- [ ] The documentation is adjusted.",0,compatibility with the extension is marked as compatible with the content object registration is adapted to work with and a feature entry to the changelog is added the documentation is adjusted ,0
372590,11017326041.0,IssuesEvent,2019-12-05 08:09:15,rorylombardi/tradingpost,https://api.github.com/repos/rorylombardi/tradingpost,closed,Clarify that users are fully allowed & encouraged to contact users for a trade with the listed info on their user page,enhancement priority:high,"This uncertainty came up a few times, ""is it alright that I just send them a message""",1.0,"Clarify that users are fully allowed & encouraged to contact users for a trade with the listed info on their user page - This uncertainty came up a few times, ""is it alright that I just send them a message""",0,clarify that users are fully allowed encouraged to contact users for a trade with the listed info on their user page this uncertainty came up a few times is it alright that i just send them a message ,0
783,10325877472.0,IssuesEvent,2019-09-01 21:08:35,unitsofmeasurement/uom-demos,https://api.github.com/repos/unitsofmeasurement/uom-demos,closed,ME 8.2 on actual device,device device:raspberryPi documents help wanted portability portability:ME ready task,Java ME 8.2 SDK offers support for a few actual devices. Especially Raspberry Pi. Try to install it on an actual Raspberry Pi and document steps to reproduce with the demo.,True,ME 8.2 on actual device - Java ME 8.2 SDK offers support for a few actual devices. Especially Raspberry Pi. Try to install it on an actual Raspberry Pi and document steps to reproduce with the demo.,1,me on actual device java me sdk offers support for a few actual devices especially raspberry pi try to install it on an actual raspberry pi and document steps to reproduce with the demo ,1
439667,12685174834.0,IssuesEvent,2020-06-20 02:35:51,crcn/paperclip,https://api.github.com/repos/crcn/paperclip,closed,PC engine should always load from graph,priority: high,"if virtual content is needed, then add function `addVirtualFile(filePath, content)`",1.0,"PC engine should always load from graph - if virtual content is needed, then add function `addVirtualFile(filePath, content)`",0,pc engine should always load from graph if virtual content is needed then add function addvirtualfile filepath content ,0
28228,6970092036.0,IssuesEvent,2017-12-11 09:02:11,Megabyte918/MultiOgar-Edited,https://api.github.com/repos/Megabyte918/MultiOgar-Edited,closed, query,Custom Code,"
Hello, I know this is duplicated thousands of times with respect to the bots, but I would like to know if the bots can be configured separately from the players, in this way:
1-Botstartsize: mass the bot spawns with, different from the player's
2-botmaxcells: maximum number of cells a bot can have; 2 cells would be recommended
Sorry for this question, but could you help me? Thanks.",1.0," query -
Hello, I know this is duplicated thousands of times with respect to the bots, but I would like to know if you can configure the bots separately from the players, this way
1-Botstartsize: Mass that generates the bot, different from the player's
2-botmaxcells: Maximas cells that can have a bot, which would be recommended 2 cells
, sorry for this question, but you could help me, thanks",0, query hello i know this is duplicated thousands of times with respect to the bots but i would like to know if you can configure the bots separately from the players this way botstartsize mass that generates the bot different from the player s botmaxcells maximas cells that can have a bot which would be recommended cells sorry for this question but you could help me thanks,0
701,9537368377.0,IssuesEvent,2019-04-30 12:20:59,CON-In-A-Box/CIAB-Signin,https://api.github.com/repos/CON-In-A-Box/CIAB-Signin,closed,Supportability: Understanding deployed versions,Supportability,"When looking at the front page of CIAB, there should be a version number findable within either the source, or in an unobtrusive place on the screen. In other products, this is sometimes shown with a full “Version: 1.2.3.4” tag in clear text, in other products it has been either hidden in the page source, or shown as white-on-white text (which can then be shown by “select-all” on the main page.",True,"Supportability: Understanding deployed versions - When looking at the front page of CIAB, there should be a version number findable within either the source, or in an unobtrusive place on the screen. In other products, this is sometimes shown with a full “Version: 1.2.3.4” tag in clear text, in other products it has been either hidden in the page source, or shown as white-on-white text (which can then be shown by “select-all” on the main page.",1,supportability understanding deployed versions when looking at the front page of ciab there should be a version number findable within either the source or in an unobtrusive place on the screen in other products this is sometimes shown with a full “version ” tag in clear text in other products it has been either hidden in the page source or shown as white on white text which can then be shown by “select all” on the main page ,1
596,8066579196.0,IssuesEvent,2018-08-04 17:32:40,dpteam/GLQuake3D,https://api.github.com/repos/dpteam/GLQuake3D,opened,Convert project to CMake,portability,"CMake makes it possible to easily create a project for building on x64, as well as Makefiles or any other build systems for other platforms.
Requires solving #2 first.",True,"Convert project to CMake - CMake makes it possible to easily create project for building on x64, as well as Makefiles or any other build systems for other platforms.
Requires solving #2 first.",1,convert project to cmake cmake makes it possible to easily create project for building on as well as makefiles or any other build systems for other platforms requires solving first ,1
57415,6546848437.0,IssuesEvent,2017-09-04 12:15:46,opengeospatial/teamengine,https://api.github.com/repos/opengeospatial/teamengine,closed,Enhance CtlExecutor to return EARL,bug fix-needs-testing,Currently the CtlExecutor returns 'ctl format' regardless of the requested format. Expected is that the the request with header `Accept: application/rdf+xml` returns the EARL report.,1.0,Enhance CtlExecutor to return EARL - Currently the CtlExecutor returns 'ctl format' regardless of the requested format. Expected is that the the request with header `Accept: application/rdf+xml` returns the EARL report.,0,enhance ctlexecutor to return earl currently the ctlexecutor returns ctl format regardless of the requested format expected is that the the request with header accept application rdf xml returns the earl report ,0
26884,2688429255.0,IssuesEvent,2015-03-31 00:01:55,coreos/fleet,https://api.github.com/repos/coreos/fleet,closed,unit with lengthy deactivation procedure remains inactive (No such file or directory bug),bug high-priority,"I'm scheduling the following units `bar.service` and `baz.service` to a single-node cluster:
```
core@core-01 ~ $ cat bar.service
[Service]
ExecStart=/usr/bin/sleep infinity
core@core-01 ~ $ cat baz.service
[Unit]
After=bar.service
BindsTo=bar.service
[Service]
ExecStart=/usr/bin/sleep infinity
ExecStop=/usr/bin/sleep 20
```
First, start the two units:
```
core@core-01 ~ $ fleetctl start --no-block bar baz
Triggered unit bar.service start
Triggered unit baz.service start
```
Check the status of fleet and systemd:
```
core@core-01 ~ $ fleetctl list-unit-files && fleetctl list-units && systemctl status bar baz
UNIT HASH DSTATE STATE TARGET
bar.service 40ea664 launched launched a84622dd.../172.17.8.101
baz.service 221b757 launched launched a84622dd.../172.17.8.101
UNIT MACHINE ACTIVE SUB
bar.service a84622dd.../172.17.8.101 active running
baz.service a84622dd.../172.17.8.101 active running
● bar.service
Loaded: loaded (/run/fleet/units/bar.service; linked-runtime; vendor preset: disabled)
Active: active (running) since Fri 2015-03-20 03:21:50 UTC; 1s ago
Main PID: 2193 (sleep)
CGroup: /system.slice/bar.service
└─2193 /usr/bin/sleep infinity
Mar 20 03:21:50 core-01 systemd[1]: Starting bar.service...
Mar 20 03:21:50 core-01 systemd[1]: Started bar.service.
● baz.service
Loaded: loaded (/run/fleet/units/baz.service; linked-runtime; vendor preset: disabled)
Active: active (running) since Fri 2015-03-20 03:21:50 UTC; 1s ago
Main PID: 2194 (sleep)
CGroup: /system.slice/baz.service
└─2194 /usr/bin/sleep infinity
Mar 20 03:21:50 core-01 systemd[1]: Starting baz.service...
Mar 20 03:21:50 core-01 systemd[1]: Started baz.service.
```
Everything is OK. Now unload the units:
```
core@core-01 ~ $ fleetctl unload --no-block bar baz
Triggered unit bar.service unload
Triggered unit baz.service unload
core@core-01 ~ $ fleetctl list-unit-files && fleetctl list-units && systemctl status bar baz
UNIT HASH DSTATE STATE TARGET
bar.service 40ea664 inactive inactive -
baz.service 221b757 inactive inactive -
UNIT MACHINE ACTIVE SUB
● bar.service
Loaded: loaded (/run/fleet/units/bar.service; linked-runtime; vendor preset: disabled)
Active: active (running) since Fri 2015-03-20 03:21:50 UTC; 12s ago
Main PID: 2193 (sleep)
CGroup: /system.slice/bar.service
└─2193 /usr/bin/sleep infinity
Mar 20 03:21:50 core-01 systemd[1]: Starting bar.service...
Mar 20 03:21:50 core-01 systemd[1]: Started bar.service.
Warning: Unit file changed on disk, 'systemctl daemon-reload' recommended.
● baz.service
Loaded: loaded (/run/fleet/units/baz.service; linked-runtime; vendor preset: disabled)
Active: deactivating (stop) since Fri 2015-03-20 03:22:00 UTC; 2s ago
Main PID: 2194 (sleep); : 2215 (sleep)
CGroup: /system.slice/baz.service
├─2194 /usr/bin/sleep infinity
└─control
└─2215 /usr/bin/sleep 20
Mar 20 03:21:50 core-01 systemd[1]: Starting baz.service...
Mar 20 03:21:50 core-01 systemd[1]: Started baz.service.
Mar 20 03:22:00 core-01 systemd[1]: Stopping baz.service...
Warning: Unit file changed on disk, 'systemctl daemon-reload' recommended.
```
fleetctl stops reporting state for the units immediately, but `baz.service` is still `deactivating`. Now start the two units again before `baz.service` finishes its `ExecStop`:
```
core@core-01 ~ $ fleetctl start --no-block bar baz
Triggered unit bar.service start
Triggered unit baz.service start
```
Check the status of fleet and systemd immediately:
```
core@core-01 ~ $ fleetctl list-unit-files && fleetctl list-units && systemctl status bar baz
UNIT HASH DSTATE STATE TARGET
bar.service 40ea664 launched launched a84622dd.../172.17.8.101
baz.service 221b757 launched launched a84622dd.../172.17.8.101
UNIT MACHINE ACTIVE SUB
bar.service a84622dd.../172.17.8.101 active running
baz.service a84622dd.../172.17.8.101 deactivating stop
● bar.service
Loaded: loaded (/run/fleet/units/bar.service; linked-runtime; vendor preset: disabled)
Active: active (running) since Fri 2015-03-20 03:22:15 UTC; 2s ago
Main PID: 2269 (sleep)
CGroup: /system.slice/bar.service
└─2269 /usr/bin/sleep infinity
Mar 20 03:22:15 core-01 systemd[1]: Started bar.service.
● baz.service
Loaded: not-found (Reason: No such file or directory)
Active: deactivating (stop) since Fri 2015-03-20 03:22:00 UTC; 17s ago
Main PID: 2194 (sleep); : 2215 (sleep)
CGroup: /system.slice/baz.service
├─2194 /usr/bin/sleep infinity
└─control
└─2215 /usr/bin/sleep 20
Mar 20 03:21:50 core-01 systemd[1]: Starting baz.service...
Mar 20 03:21:50 core-01 systemd[1]: Started baz.service.
Mar 20 03:22:00 core-01 systemd[1]: Stopping baz.service...
```
`baz.service` is still not done deactivating, but oddly enough, it's still `not-found`. Checking the status after deactivation is complete:
```
core@core-01 ~ $ fleetctl list-unit-files && fleetctl list-units && systemctl status bar baz
UNIT HASH DSTATE STATE TARGET
bar.service 40ea664 launched launched a84622dd.../172.17.8.101
baz.service 221b757 launched launched a84622dd.../172.17.8.101
UNIT MACHINE ACTIVE SUB
bar.service a84622dd.../172.17.8.101 active running
baz.service a84622dd.../172.17.8.101 inactive dead
● bar.service
Loaded: loaded (/run/fleet/units/bar.service; linked-runtime; vendor preset: disabled)
Active: active (running) since Fri 2015-03-20 03:22:15 UTC; 10s ago
Main PID: 2269 (sleep)
CGroup: /system.slice/bar.service
└─2269 /usr/bin/sleep infinity
Mar 20 03:22:15 core-01 systemd[1]: Started bar.service.
● baz.service
Loaded: loaded (/run/fleet/units/baz.service; linked-runtime; vendor preset: disabled)
Active: inactive (dead)
Mar 20 03:20:33 core-01 systemd[1]: Starting baz.service...
Mar 20 03:20:38 core-01 systemd[1]: Starting baz.service...
Mar 20 03:20:38 core-01 systemd[1]: Starting baz.service...
Mar 20 03:20:38 core-01 systemd[1]: Started baz.service.
Mar 20 03:21:18 core-01 systemd[1]: Stopping baz.service...
Mar 20 03:21:38 core-01 systemd[1]: Stopped baz.service.
Mar 20 03:21:50 core-01 systemd[1]: Starting baz.service...
Mar 20 03:21:50 core-01 systemd[1]: Started baz.service.
Mar 20 03:22:00 core-01 systemd[1]: Stopping baz.service...
Mar 20 03:22:20 core-01 systemd[1]: Stopped baz.service.
```
Now `baz.service` is `inactive`. Given that I just called `fleetctl start` on it, though, I would expect it to be `active`. Checking the logs, I see the dreaded `No such file or directory` error (fifth from the bottom):
```
Mar 20 03:21:49 core-01 fleetd[963]: INFO engine.go:272: Scheduled Unit(bar.service) to Machine(a84622dda07549d0b4d855ca2b78948c)
Mar 20 03:21:49 core-01 fleetd[963]: INFO reconciler.go:163: EngineReconciler completed task: {Type: AttemptScheduleUnit, JobName: bar.service, MachineID: a84622dda07549d0b4d855ca2b78948c, Reason: ""target state launched and unit not scheduled""}
Mar 20 03:21:49 core-01 fleetd[963]: INFO engine.go:272: Scheduled Unit(baz.service) to Machine(a84622dda07549d0b4d855ca2b78948c)
Mar 20 03:21:49 core-01 fleetd[963]: INFO reconciler.go:163: EngineReconciler completed task: {Type: AttemptScheduleUnit, JobName: baz.service, MachineID: a84622dda07549d0b4d855ca2b78948c, Reason: ""target state launched and unit not scheduled""}
Mar 20 03:21:50 core-01 fleetd[963]: INFO manager.go:262: Writing systemd unit bar.service (44b)
Mar 20 03:21:50 core-01 fleetd[963]: INFO manager.go:262: Writing systemd unit baz.service (117b)
Mar 20 03:21:50 core-01 fleetd[963]: INFO manager.go:134: Triggered systemd unit bar.service start: job=7560
Mar 20 03:21:50 core-01 fleetd[963]: INFO manager.go:134: Triggered systemd unit baz.service start: job=7640
Mar 20 03:21:50 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=LoadUnit job=bar.service reason=""unit scheduled here but not loaded""
Mar 20 03:21:50 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=LoadUnit job=baz.service reason=""unit scheduled here but not loaded""
Mar 20 03:21:50 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=StartUnit job=bar.service reason=""unit currently loaded but desired state is launched""
Mar 20 03:21:50 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=StartUnit job=baz.service reason=""unit currently loaded but desired state is launched""
Mar 20 03:22:00 core-01 fleetd[963]: INFO manager.go:145: Triggered systemd unit bar.service stop: job=7721
Mar 20 03:22:00 core-01 fleetd[963]: INFO manager.go:275: Removing systemd unit bar.service
Mar 20 03:22:00 core-01 fleetd[963]: INFO manager.go:145: Triggered systemd unit baz.service stop: job=7722
Mar 20 03:22:00 core-01 fleetd[963]: INFO manager.go:275: Removing systemd unit baz.service
Mar 20 03:22:00 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=UnloadUnit job=bar.service reason=""unit loaded but not scheduled here""
Mar 20 03:22:00 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=UnloadUnit job=baz.service reason=""unit loaded but not scheduled here""
Mar 20 03:22:00 core-01 fleetd[963]: INFO engine.go:257: Unscheduled Job(bar.service) from Machine(a84622dda07549d0b4d855ca2b78948c)
Mar 20 03:22:00 core-01 fleetd[963]: INFO reconciler.go:163: EngineReconciler completed task: {Type: UnscheduleUnit, JobName: bar.service, MachineID: a84622dda07549d0b4d855ca2b78948c, Reason: ""target state inactive""}
Mar 20 03:22:00 core-01 fleetd[963]: INFO engine.go:257: Unscheduled Job(baz.service) from Machine(a84622dda07549d0b4d855ca2b78948c)
Mar 20 03:22:00 core-01 fleetd[963]: INFO reconciler.go:163: EngineReconciler completed task: {Type: UnscheduleUnit, JobName: baz.service, MachineID: a84622dda07549d0b4d855ca2b78948c, Reason: ""target state inactive""}
Mar 20 03:22:14 core-01 fleetd[963]: INFO engine.go:272: Scheduled Unit(bar.service) to Machine(a84622dda07549d0b4d855ca2b78948c)
Mar 20 03:22:14 core-01 fleetd[963]: INFO reconciler.go:163: EngineReconciler completed task: {Type: AttemptScheduleUnit, JobName: bar.service, MachineID: a84622dda07549d0b4d855ca2b78948c, Reason: ""target state launched and unit not scheduled""}
Mar 20 03:22:14 core-01 fleetd[963]: INFO engine.go:272: Scheduled Unit(baz.service) to Machine(a84622dda07549d0b4d855ca2b78948c)
Mar 20 03:22:14 core-01 fleetd[963]: INFO reconciler.go:163: EngineReconciler completed task: {Type: AttemptScheduleUnit, JobName: baz.service, MachineID: a84622dda07549d0b4d855ca2b78948c, Reason: ""target state launched and unit not scheduled""}
Mar 20 03:22:15 core-01 fleetd[963]: INFO manager.go:262: Writing systemd unit bar.service (44b)
Mar 20 03:22:15 core-01 fleetd[963]: INFO manager.go:198: Instructing systemd to reload units
Mar 20 03:22:15 core-01 fleetd[963]: INFO manager.go:262: Writing systemd unit baz.service (117b)
Mar 20 03:22:15 core-01 fleetd[963]: INFO manager.go:134: Triggered systemd unit bar.service start: job=7806
Mar 20 03:22:15 core-01 fleetd[963]: ERROR manager.go:136: Failed to trigger systemd unit baz.service start: Unit baz.service failed to load: No such file or directory.
Mar 20 03:22:15 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=LoadUnit job=bar.service reason=""unit scheduled here but not loaded""
Mar 20 03:22:15 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=LoadUnit job=baz.service reason=""unit scheduled here but not loaded""
Mar 20 03:22:15 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=StartUnit job=bar.service reason=""unit currently loaded but desired state is launched""
Mar 20 03:22:15 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=StartUnit job=baz.service reason=""unit currently loaded but desired state is launched""
```",1.0,"unit with lengthy deactivation procedure remains inactive (No such file or directory bug) - I'm scheduling the following units `bar.service` and `baz.service` to a single-node cluster:
```
core@core-01 ~ $ cat bar.service
[Service]
ExecStart=/usr/bin/sleep infinity
core@core-01 ~ $ cat baz.service
[Unit]
After=bar.service
BindsTo=bar.service
[Service]
ExecStart=/usr/bin/sleep infinity
ExecStop=/usr/bin/sleep 20
```
First, start the two units:
```
core@core-01 ~ $ fleetctl start --no-block bar baz
Triggered unit bar.service start
Triggered unit baz.service start
```
Check the status of fleet and systemd:
```
core@core-01 ~ $ fleetctl list-unit-files && fleetctl list-units && systemctl status bar baz
UNIT HASH DSTATE STATE TARGET
bar.service 40ea664 launched launched a84622dd.../172.17.8.101
baz.service 221b757 launched launched a84622dd.../172.17.8.101
UNIT MACHINE ACTIVE SUB
bar.service a84622dd.../172.17.8.101 active running
baz.service a84622dd.../172.17.8.101 active running
● bar.service
Loaded: loaded (/run/fleet/units/bar.service; linked-runtime; vendor preset: disabled)
Active: active (running) since Fri 2015-03-20 03:21:50 UTC; 1s ago
Main PID: 2193 (sleep)
CGroup: /system.slice/bar.service
└─2193 /usr/bin/sleep infinity
Mar 20 03:21:50 core-01 systemd[1]: Starting bar.service...
Mar 20 03:21:50 core-01 systemd[1]: Started bar.service.
● baz.service
Loaded: loaded (/run/fleet/units/baz.service; linked-runtime; vendor preset: disabled)
Active: active (running) since Fri 2015-03-20 03:21:50 UTC; 1s ago
Main PID: 2194 (sleep)
CGroup: /system.slice/baz.service
└─2194 /usr/bin/sleep infinity
Mar 20 03:21:50 core-01 systemd[1]: Starting baz.service...
Mar 20 03:21:50 core-01 systemd[1]: Started baz.service.
```
Everything is OK. Now unload the units:
```
core@core-01 ~ $ fleetctl unload --no-block bar baz
Triggered unit bar.service unload
Triggered unit baz.service unload
core@core-01 ~ $ fleetctl list-unit-files && fleetctl list-units && systemctl status bar baz
UNIT HASH DSTATE STATE TARGET
bar.service 40ea664 inactive inactive -
baz.service 221b757 inactive inactive -
UNIT MACHINE ACTIVE SUB
● bar.service
Loaded: loaded (/run/fleet/units/bar.service; linked-runtime; vendor preset: disabled)
Active: active (running) since Fri 2015-03-20 03:21:50 UTC; 12s ago
Main PID: 2193 (sleep)
CGroup: /system.slice/bar.service
└─2193 /usr/bin/sleep infinity
Mar 20 03:21:50 core-01 systemd[1]: Starting bar.service...
Mar 20 03:21:50 core-01 systemd[1]: Started bar.service.
Warning: Unit file changed on disk, 'systemctl daemon-reload' recommended.
● baz.service
Loaded: loaded (/run/fleet/units/baz.service; linked-runtime; vendor preset: disabled)
Active: deactivating (stop) since Fri 2015-03-20 03:22:00 UTC; 2s ago
Main PID: 2194 (sleep); : 2215 (sleep)
CGroup: /system.slice/baz.service
├─2194 /usr/bin/sleep infinity
└─control
└─2215 /usr/bin/sleep 20
Mar 20 03:21:50 core-01 systemd[1]: Starting baz.service...
Mar 20 03:21:50 core-01 systemd[1]: Started baz.service.
Mar 20 03:22:00 core-01 systemd[1]: Stopping baz.service...
Warning: Unit file changed on disk, 'systemctl daemon-reload' recommended.
```
fleetctl stops reporting state for the units immediately, but `baz.service` is still `deactivating`. Now start the two units again before `baz.service` finishes its `ExecStop`:
```
core@core-01 ~ $ fleetctl start --no-block bar baz
Triggered unit bar.service start
Triggered unit baz.service start
```
Check the status of fleet and systemd immediately:
```
core@core-01 ~ $ fleetctl list-unit-files && fleetctl list-units && systemctl status bar baz
UNIT HASH DSTATE STATE TARGET
bar.service 40ea664 launched launched a84622dd.../172.17.8.101
baz.service 221b757 launched launched a84622dd.../172.17.8.101
UNIT MACHINE ACTIVE SUB
bar.service a84622dd.../172.17.8.101 active running
baz.service a84622dd.../172.17.8.101 deactivating stop
● bar.service
Loaded: loaded (/run/fleet/units/bar.service; linked-runtime; vendor preset: disabled)
Active: active (running) since Fri 2015-03-20 03:22:15 UTC; 2s ago
Main PID: 2269 (sleep)
CGroup: /system.slice/bar.service
└─2269 /usr/bin/sleep infinity
Mar 20 03:22:15 core-01 systemd[1]: Started bar.service.
● baz.service
Loaded: not-found (Reason: No such file or directory)
Active: deactivating (stop) since Fri 2015-03-20 03:22:00 UTC; 17s ago
Main PID: 2194 (sleep); : 2215 (sleep)
CGroup: /system.slice/baz.service
├─2194 /usr/bin/sleep infinity
└─control
└─2215 /usr/bin/sleep 20
Mar 20 03:21:50 core-01 systemd[1]: Starting baz.service...
Mar 20 03:21:50 core-01 systemd[1]: Started baz.service.
Mar 20 03:22:00 core-01 systemd[1]: Stopping baz.service...
```
`baz.service` is still not done deactivating, but oddly enough, it's still `not-found`. Checking the status after deactivation is complete:
```
core@core-01 ~ $ fleetctl list-unit-files && fleetctl list-units && systemctl status bar baz
UNIT HASH DSTATE STATE TARGET
bar.service 40ea664 launched launched a84622dd.../172.17.8.101
baz.service 221b757 launched launched a84622dd.../172.17.8.101
UNIT MACHINE ACTIVE SUB
bar.service a84622dd.../172.17.8.101 active running
baz.service a84622dd.../172.17.8.101 inactive dead
● bar.service
Loaded: loaded (/run/fleet/units/bar.service; linked-runtime; vendor preset: disabled)
Active: active (running) since Fri 2015-03-20 03:22:15 UTC; 10s ago
Main PID: 2269 (sleep)
CGroup: /system.slice/bar.service
└─2269 /usr/bin/sleep infinity
Mar 20 03:22:15 core-01 systemd[1]: Started bar.service.
● baz.service
Loaded: loaded (/run/fleet/units/baz.service; linked-runtime; vendor preset: disabled)
Active: inactive (dead)
Mar 20 03:20:33 core-01 systemd[1]: Starting baz.service...
Mar 20 03:20:38 core-01 systemd[1]: Starting baz.service...
Mar 20 03:20:38 core-01 systemd[1]: Starting baz.service...
Mar 20 03:20:38 core-01 systemd[1]: Started baz.service.
Mar 20 03:21:18 core-01 systemd[1]: Stopping baz.service...
Mar 20 03:21:38 core-01 systemd[1]: Stopped baz.service.
Mar 20 03:21:50 core-01 systemd[1]: Starting baz.service...
Mar 20 03:21:50 core-01 systemd[1]: Started baz.service.
Mar 20 03:22:00 core-01 systemd[1]: Stopping baz.service...
Mar 20 03:22:20 core-01 systemd[1]: Stopped baz.service.
```
Now `baz.service` is `inactive`. Given that I just called `fleetctl start` on it, though, I would expect it to be `active`. Checking the logs, I see the dreaded `No such file or directory` error (fifth from the bottom):
```
Mar 20 03:21:49 core-01 fleetd[963]: INFO engine.go:272: Scheduled Unit(bar.service) to Machine(a84622dda07549d0b4d855ca2b78948c)
Mar 20 03:21:49 core-01 fleetd[963]: INFO reconciler.go:163: EngineReconciler completed task: {Type: AttemptScheduleUnit, JobName: bar.service, MachineID: a84622dda07549d0b4d855ca2b78948c, Reason: ""target state launched and unit not scheduled""}
Mar 20 03:21:49 core-01 fleetd[963]: INFO engine.go:272: Scheduled Unit(baz.service) to Machine(a84622dda07549d0b4d855ca2b78948c)
Mar 20 03:21:49 core-01 fleetd[963]: INFO reconciler.go:163: EngineReconciler completed task: {Type: AttemptScheduleUnit, JobName: baz.service, MachineID: a84622dda07549d0b4d855ca2b78948c, Reason: ""target state launched and unit not scheduled""}
Mar 20 03:21:50 core-01 fleetd[963]: INFO manager.go:262: Writing systemd unit bar.service (44b)
Mar 20 03:21:50 core-01 fleetd[963]: INFO manager.go:262: Writing systemd unit baz.service (117b)
Mar 20 03:21:50 core-01 fleetd[963]: INFO manager.go:134: Triggered systemd unit bar.service start: job=7560
Mar 20 03:21:50 core-01 fleetd[963]: INFO manager.go:134: Triggered systemd unit baz.service start: job=7640
Mar 20 03:21:50 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=LoadUnit job=bar.service reason=""unit scheduled here but not loaded""
Mar 20 03:21:50 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=LoadUnit job=baz.service reason=""unit scheduled here but not loaded""
Mar 20 03:21:50 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=StartUnit job=bar.service reason=""unit currently loaded but desired state is launched""
Mar 20 03:21:50 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=StartUnit job=baz.service reason=""unit currently loaded but desired state is launched""
Mar 20 03:22:00 core-01 fleetd[963]: INFO manager.go:145: Triggered systemd unit bar.service stop: job=7721
Mar 20 03:22:00 core-01 fleetd[963]: INFO manager.go:275: Removing systemd unit bar.service
Mar 20 03:22:00 core-01 fleetd[963]: INFO manager.go:145: Triggered systemd unit baz.service stop: job=7722
Mar 20 03:22:00 core-01 fleetd[963]: INFO manager.go:275: Removing systemd unit baz.service
Mar 20 03:22:00 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=UnloadUnit job=bar.service reason=""unit loaded but not scheduled here""
Mar 20 03:22:00 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=UnloadUnit job=baz.service reason=""unit loaded but not scheduled here""
Mar 20 03:22:00 core-01 fleetd[963]: INFO engine.go:257: Unscheduled Job(bar.service) from Machine(a84622dda07549d0b4d855ca2b78948c)
Mar 20 03:22:00 core-01 fleetd[963]: INFO reconciler.go:163: EngineReconciler completed task: {Type: UnscheduleUnit, JobName: bar.service, MachineID: a84622dda07549d0b4d855ca2b78948c, Reason: ""target state inactive""}
Mar 20 03:22:00 core-01 fleetd[963]: INFO engine.go:257: Unscheduled Job(baz.service) from Machine(a84622dda07549d0b4d855ca2b78948c)
Mar 20 03:22:00 core-01 fleetd[963]: INFO reconciler.go:163: EngineReconciler completed task: {Type: UnscheduleUnit, JobName: baz.service, MachineID: a84622dda07549d0b4d855ca2b78948c, Reason: ""target state inactive""}
Mar 20 03:22:14 core-01 fleetd[963]: INFO engine.go:272: Scheduled Unit(bar.service) to Machine(a84622dda07549d0b4d855ca2b78948c)
Mar 20 03:22:14 core-01 fleetd[963]: INFO reconciler.go:163: EngineReconciler completed task: {Type: AttemptScheduleUnit, JobName: bar.service, MachineID: a84622dda07549d0b4d855ca2b78948c, Reason: ""target state launched and unit not scheduled""}
Mar 20 03:22:14 core-01 fleetd[963]: INFO engine.go:272: Scheduled Unit(baz.service) to Machine(a84622dda07549d0b4d855ca2b78948c)
Mar 20 03:22:14 core-01 fleetd[963]: INFO reconciler.go:163: EngineReconciler completed task: {Type: AttemptScheduleUnit, JobName: baz.service, MachineID: a84622dda07549d0b4d855ca2b78948c, Reason: ""target state launched and unit not scheduled""}
Mar 20 03:22:15 core-01 fleetd[963]: INFO manager.go:262: Writing systemd unit bar.service (44b)
Mar 20 03:22:15 core-01 fleetd[963]: INFO manager.go:198: Instructing systemd to reload units
Mar 20 03:22:15 core-01 fleetd[963]: INFO manager.go:262: Writing systemd unit baz.service (117b)
Mar 20 03:22:15 core-01 fleetd[963]: INFO manager.go:134: Triggered systemd unit bar.service start: job=7806
Mar 20 03:22:15 core-01 fleetd[963]: ERROR manager.go:136: Failed to trigger systemd unit baz.service start: Unit baz.service failed to load: No such file or directory.
Mar 20 03:22:15 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=LoadUnit job=bar.service reason=""unit scheduled here but not loaded""
Mar 20 03:22:15 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=LoadUnit job=baz.service reason=""unit scheduled here but not loaded""
Mar 20 03:22:15 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=StartUnit job=bar.service reason=""unit currently loaded but desired state is launched""
Mar 20 03:22:15 core-01 fleetd[963]: INFO reconcile.go:311: AgentReconciler completed task: type=StartUnit job=baz.service reason=""unit currently loaded but desired state is launched""
```",0,unit with lengthy deactivation procedure remains inactive no such file or directory bug i m scheduling the following units bar service and baz service to a single node cluster core core cat bar service execstart usr bin sleep infinity core core cat baz service after bar service bindsto bar service execstart usr bin sleep infinity execstop usr bin sleep first start the two units core core fleetctl start no block bar baz triggered unit bar service start triggered unit baz service start check the status of fleet and systemd core core fleetctl list unit files fleetctl list units systemctl status bar baz unit hash dstate state target bar service launched launched baz service launched launched unit machine active sub bar service active running baz service active running ● bar service loaded loaded run fleet units bar service linked runtime vendor preset disabled active active running since fri utc ago main pid sleep cgroup system slice bar service └─ usr bin sleep infinity mar core systemd starting bar service mar core systemd started bar service ● baz service loaded loaded run fleet units baz service linked runtime vendor preset disabled active active running since fri utc ago main pid sleep cgroup system slice baz service └─ usr bin sleep infinity mar core systemd starting baz service mar core systemd started baz service everything is ok now unload the units core core fleetctl unload no block bar baz triggered unit bar service unload triggered unit baz service unload core core fleetctl list unit files fleetctl list units systemctl status bar baz unit hash dstate state target bar service inactive inactive baz service inactive inactive unit machine active sub ● bar service loaded loaded run fleet units bar service linked runtime vendor preset disabled active active running since fri utc ago main pid sleep cgroup system slice bar service └─ usr bin sleep infinity mar core systemd starting bar service mar core systemd started bar service warning unit file changed on disk systemctl daemon reload recommended ● baz service loaded loaded run fleet units baz service linked runtime vendor preset disabled active deactivating stop since fri utc ago main pid sleep sleep cgroup system slice baz service ├─ usr bin sleep infinity └─control └─ usr bin sleep mar core systemd starting baz service mar core systemd started baz service mar core systemd stopping baz service warning unit file changed on disk systemctl daemon reload recommended fleetctl stops reporting state for the units immediately but baz service is still deactivating now start the two units again before baz service finishes its execstop core core fleetctl start no block bar baz triggered unit bar service start triggered unit baz service start check the status of fleet and systemd immediately core core fleetctl list unit files fleetctl list units systemctl status bar baz unit hash dstate state target bar service launched launched baz service launched launched unit machine active sub bar service active running baz service deactivating stop ● bar service loaded loaded run fleet units bar service linked runtime vendor preset disabled active active running since fri utc ago main pid sleep cgroup system slice bar service └─ usr bin sleep infinity mar core systemd started bar service ● baz service loaded not found reason no such file or directory active deactivating stop since fri utc ago main pid sleep sleep cgroup system slice baz service ├─ usr bin sleep infinity └─control └─ usr bin sleep mar core systemd starting baz service mar core systemd 
started baz service mar core systemd stopping baz service baz service is still not done deactivating but oddly enough it s still not found checking the status after deactivation is complete core core fleetctl list unit files fleetctl list units systemctl status bar baz unit hash dstate state target bar service launched launched baz service launched launched unit machine active sub bar service active running baz service inactive dead ● bar service loaded loaded run fleet units bar service linked runtime vendor preset disabled active active running since fri utc ago main pid sleep cgroup system slice bar service └─ usr bin sleep infinity mar core systemd started bar service ● baz service loaded loaded run fleet units baz service linked runtime vendor preset disabled active inactive dead mar core systemd starting baz service mar core systemd starting baz service mar core systemd starting baz service mar core systemd started baz service mar core systemd stopping baz service mar core systemd stopped baz service mar core systemd starting baz service mar core systemd started baz service mar core systemd stopping baz service mar core systemd stopped baz service now baz service is inactive given that i just called fleetctl start on it though i would expect it to be active checking the logs i see the dreaded no such file or directory error fifth from the bottom mar core fleetd info engine go scheduled unit bar service to machine mar core fleetd info reconciler go enginereconciler completed task type attemptscheduleunit jobname bar service machineid reason target state launched and unit not scheduled mar core fleetd info engine go scheduled unit baz service to machine mar core fleetd info reconciler go enginereconciler completed task type attemptscheduleunit jobname baz service machineid reason target state launched and unit not scheduled mar core fleetd info manager go writing systemd unit bar service mar core fleetd info manager go writing systemd unit baz service mar core fleetd info manager go triggered systemd unit bar service start job mar core fleetd info manager go triggered systemd unit baz service start job mar core fleetd info reconcile go agentreconciler completed task type loadunit job bar service reason unit scheduled here but not loaded mar core fleetd info reconcile go agentreconciler completed task type loadunit job baz service reason unit scheduled here but not loaded mar core fleetd info reconcile go agentreconciler completed task type startunit job bar service reason unit currently loaded but desired state is launched mar core fleetd info reconcile go agentreconciler completed task type startunit job baz service reason unit currently loaded but desired state is launched mar core fleetd info manager go triggered systemd unit bar service stop job mar core fleetd info manager go removing systemd unit bar service mar core fleetd info manager go triggered systemd unit baz service stop job mar core fleetd info manager go removing systemd unit baz service mar core fleetd info reconcile go agentreconciler completed task type unloadunit job bar service reason unit loaded but not scheduled here mar core fleetd info reconcile go agentreconciler completed task type unloadunit job baz service reason unit loaded but not scheduled here mar core fleetd info engine go unscheduled job bar service from machine mar core fleetd info reconciler go enginereconciler completed task type unscheduleunit jobname bar service machineid reason target state inactive mar core fleetd info engine go unscheduled job 
baz service from machine mar core fleetd info reconciler go enginereconciler completed task type unscheduleunit jobname baz service machineid reason target state inactive mar core fleetd info engine go scheduled unit bar service to machine mar core fleetd info reconciler go enginereconciler completed task type attemptscheduleunit jobname bar service machineid reason target state launched and unit not scheduled mar core fleetd info engine go scheduled unit baz service to machine mar core fleetd info reconciler go enginereconciler completed task type attemptscheduleunit jobname baz service machineid reason target state launched and unit not scheduled mar core fleetd info manager go writing systemd unit bar service mar core fleetd info manager go instructing systemd to reload units mar core fleetd info manager go writing systemd unit baz service mar core fleetd info manager go triggered systemd unit bar service start job mar core fleetd error manager go failed to trigger systemd unit baz service start unit baz service failed to load no such file or directory mar core fleetd info reconcile go agentreconciler completed task type loadunit job bar service reason unit scheduled here but not loaded mar core fleetd info reconcile go agentreconciler completed task type loadunit job baz service reason unit scheduled here but not loaded mar core fleetd info reconcile go agentreconciler completed task type startunit job bar service reason unit currently loaded but desired state is launched mar core fleetd info reconcile go agentreconciler completed task type startunit job baz service reason unit currently loaded but desired state is launched ,0
296,5513207230.0,IssuesEvent,2017-03-17 11:48:20,antirez/redis,https://api.github.com/repos/antirez/redis,closed,redis server version 3.2.0 and above crash on armv5tejl,crash report portability,"Hi, I'm evaluating redis on a small arm linux platform.
Server versions 3.0.0 and below work, but 3.2.0 and above do not.
A simple hmset will crash the server immediately; see the attached log.
Sometimes I'm able to add a couple of keys before the server dies, but I get a nil value when trying to get a key's value.
Memory is set to 10mb.
The server binary is cross-compiled on Ubuntu 14.04.03 using arm-linux-gnueabi 4.7.3.
libc is used.
Have I missed something obvious?
br
[redis-3.2.0.log.zip](https://github.com/antirez/redis/files/608522/redis-3.2.0.log.zip)
",True,"redis server version 3.2.0 and above crash on armv5tejl - Hi, I'm evaluating redis on a small arm linux platform.
Server version 3.0.0 and below works, but 3.2.0 and above does not.
A simple hmset will crash the server immediately, see attached log.
Sometimes I'm able to add a couple of keys before the server dies, but I get a nil value when trying to get key value.
Memory is set to 10mb.
the server binary is cross-compiled on ubuntu 14.04.03 using arm-linux-gnueabi 4.7.3.
libc is used.
Have I missed something obvious?
br
[redis-3.2.0.log.zip](https://github.com/antirez/redis/files/608522/redis-3.2.0.log.zip)
",1,redis server version and above crash on hi i m evaluating redis on a small arm linux platform server version and below works but and above does not a simple hmset will crash the server immediately see attached log sometimes i m able to add a couple of keys before the server dies but i get a nil value when trying to get key value memory is set to the server binary is cross compiled on ubuntu using arm linux gnueabi libc is used have i missed something obvious br ,1
1915,30122442151.0,IssuesEvent,2023-06-30 16:16:33,zcash/zcash,https://api.github.com/repos/zcash/zcash,closed,Create package(s) that work on Ubuntu,A-dependencies portability,"Ideally, Ubuntu Trusty and later should be supported.
Note that Trusty has gcc 4.8 which uses a different version of libgomp; I *think* it is sufficient to just compile with a later gcc and declare a dependency on the correct libgomp, but if that doesn't work then it might be necessary to have more than one package.
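As a purely illustrative sketch of where the libgomp coupling comes from (my own minimal example, not zcash code), any OpenMP translation unit is linked against the libgomp runtime of the gcc that compiled it, and the symbol versions that runtime needs can differ between gcc 4.8 and later releases:
```
/* Minimal OpenMP example, illustrative only: building it with
 * `gcc -fopenmp omp_sum.c` links against the libgomp shipped with
 * whichever gcc did the compiling. */
#include <omp.h>
#include <stdio.h>

int main(void) {
    long sum = 0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < 1000; i++)
        sum += i;
    printf("sum=%ld threads=%d\n", sum, omp_get_max_threads());
    return 0;
}
```
Inspecting the built binary (for example with `objdump -p` to see the version references it requires) is one way to check whether a package built with a newer gcc can still run against Trusty's libgomp.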
[Edit: oh, just declaring a dependency doesn't work because the correct libgomp is not in the default Ubuntu Trusty repos.]",True,"Create package(s) that work on Ubuntu - Ideally, Ubuntu Trusty and later should be supported.
Note that Trusty has gcc 4.8 which uses a different version of libgomp; I *think* it is sufficient to just compile with a later gcc and declare a dependency on the correct libgomp, but if that doesn't work then it might be necessary to have more than one package.
[Edit: oh, just declaring a dependency doesn't work because the correct libgomp is not in the default Ubuntu Trusty repos.]",1,create package s that work on ubuntu ideally ubuntu trusty and later should be supported note that trusty has gcc which uses a different version of libgomp i think it is sufficient to just compile with a later gcc and declare a dependency on the correct libgomp but if that doesn t work then it might be necessary to have more than one package ,1
540,7633376914.0,IssuesEvent,2018-05-06 04:05:33,globaleaks/GlobaLeaks,https://api.github.com/repos/globaleaks/GlobaLeaks,opened,Remove Absolute Paths from File Uploads in Database,C: Backend F: Portability T: Bug T: Refactoring,"**Current behavior**
Currently, filepaths in the database store the full absolute path. This drastically complicates relocating GlobaLeaks installs between systems and distributions, especially for Developer Mode instances, and makes tenant import/export much more complicated, as the paths need to be rewritten on the fly. This same quirk exists both in receivertip files and in whistleblower files.
**Expected behavior**
GlobaLeaks should use relative paths from the attachment directory and not use absolute paths.
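A minimal sketch of the proposed behavior, written here in C purely for illustration (GlobaLeaks itself is not a C project, and the names `attachment_dir` and `resolve_path` are invented for this example): the database keeps only the path relative to the attachment directory, and the absolute path is produced at access time by joining it with the configured base directory.
```
/* Illustrative sketch only: store relative paths, join with the base
 * directory when the file is actually opened. */
#include <stdio.h>

/* Hypothetical configured base; a real install would read this from settings. */
static const char *attachment_dir = "/var/globaleaks/files";

static void resolve_path(char *out, size_t outlen, const char *stored_relative) {
    /* stored_relative is what the database would persist, e.g. "submission/abc123" */
    snprintf(out, outlen, "%s/%s", attachment_dir, stored_relative);
}

int main(void) {
    char full[512];
    resolve_path(full, sizeof(full), "submission/abc123");
    printf("%s\n", full); /* prints /var/globaleaks/files/submission/abc123 */
    return 0;
}
```
With that layout, relocating an install or exporting a tenant only requires moving the files and updating the single base-directory setting, which is the simplification this issue is after.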
**Steps to reproduce the problem or feature illustration**
Upload a file to the GL feature.
**What is the motivation or use case for changing the behavior?**
Simplification of tenant import/export.
**GlobaLeaks version:** devel
**Browser:** N/A
**Server Operating System and Version (if applicable):** Gentoo
**Client Operating System and Version (if applicable):** Gentoo
",True,"Remove Absolute Paths from File Uploads in Database - **Current behavior**
Currently, filepaths in the database store the full absolute path. This drastically complicates relocating GlobaLeaks installs between systems and distributions especially for Developer Mode instances, and makes tenant import/export much more complicated as the paths need to be rewritten on the fly. This same quirk exists both in receivertip files and in whistleblower files,
**Expected behavior**
GlobaLeaks should use relative paths from the attachment directory and not use absolute paths.
**Steps to reproduce the problem or feature illustration**
Upload a file to the GL feature.
**What is the motivation or use case for changing the behavior?**
Simplification of tenant import/export.
**GlobaLeaks version:** devel
**Browser:** N/A
**Server Operating System and Version (if applicable):** Gentoo
**Client Operating System and Version (if applicable):** Gentoo
",1,remove absolute paths from file uploads in database current behavior currently filepaths in the database store the full absolute path this drastically complicates relocating globaleaks installs between systems and distributions especially for developer mode instances and makes tenant import export much more complicated as the paths need to be rewritten on the fly this same quirk exists both in receivertip files and in whistleblower files expected behavior globaleaks should use relative paths from the attachment directory and not use absolute paths steps to reproduce the problem or feature illustration upload a file to the gl feature what is the motivation or use case for changing the behavior simplification of tenant import export globaleaks version devel browser n a server operating system and version if applicable gentoo client operating system and version if applicable gentoo ,1
1422,21178163423.0,IssuesEvent,2022-04-08 03:58:07,chapel-lang/chapel,https://api.github.com/repos/chapel-lang/chapel,closed,Chapel on Macs with M1 chips,area: Compiler area: Runtime user issue type: Portability,"This issue generally asks ""How is Chapel doing on Macs with M1 chips?"" where I think we mostly don't have much experience within the core team. It would be good to be able to have access to one in-house to take stock of things.
We have had a user mention on Gitter that they were able to download from GitHub and build from source. The GASNet team's experience seems to reflect this as well.
At present we know:
- [x] we don't have a homebrew bottle for M1 Macs (#17910)
- brew audit on such Macs reports three issues:
- [x] [a failure related to Qthreads](https://github.com/chapel-lang/chapel/issues/17910#issuecomment-950246280)
- [x] [another related to Python](https://github.com/chapel-lang/chapel/issues/17910#issuecomment-950246280)
- [x] another related to GMP (resolved)
- [x] the command `brew install --build-from-source chapel` results in `error: What kind of a Mac is this?`
- [ ] the GASNet team is seeing a failure on their M1 Mac runs (#17825)
",True,"Chapel on Macs with M1 chips - This issue generally asks ""How is Chapel doing on Macs with M1 chips?"" where I think we mostly don't have much experience within the core team. It would be good to be able to have access to one in-house to take stock of things.
We have had a user mention on Gitter that they were able to download from GitHub and build from source. The GASNet's team seems to reflect this as well.
At present we know:
- [x] we don't have a homebrew bottle for M1 Macs (#17910)
- brew audit on such Macs reports three issues:
- [x] [a failure related to Qthreads](https://github.com/chapel-lang/chapel/issues/17910#issuecomment-950246280)
- [x] [another related to Python](https://github.com/chapel-lang/chapel/issues/17910#issuecomment-950246280)
- [x] another related to GMP (resolved)
- [x] the command `brew install --build-from-source chapel` results in `error: What kind of a Mac is this?`
- [ ] the GASNet team is seeing a failure on their M1 Mac runs (#17825)
",1,chapel on macs with chips this issue generally asks how is chapel doing on macs with chips where i think we mostly don t have much experience within the core team it would be good to be able to have access to one in house to take stock of things we have had a user mention on gitter that they were able to download from github and build from source the gasnet s team seems to reflect this as well at present we know we don t have a homebrew bottle for macs brew audit on such macs reports three issues another related to gmp resolved the command brew install build from source chapel results in error what kind of a mac is this the gasnet team is seeing a failure on their mac runs ,1
170,3883362282.0,IssuesEvent,2016-04-13 13:38:55,edenhill/librdkafka,https://api.github.com/repos/edenhill/librdkafka,closed, portability,portability,"While working on Solaris porting, we noticed that librdkafka only builds on Solaris if the SUNWhea package (kernel headers) is installed, because librdkafka is using sys/queue.h. The header is a BSD-ism, and may cause other portability problems in the future, so I thought I'd let you know.
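For readers unfamiliar with the header, here is a small illustrative snippet (my own, not taken from librdkafka) of the kind of `<sys/queue.h>` macro usage that creates this dependency; it only compiles where the BSD queue macros are available, which on Solaris apparently means installing the kernel-headers package:
```
/* Illustrative only: typical BSD <sys/queue.h> usage that creates the
 * portability dependency described above. */
#include <sys/queue.h>   /* BSD-ism; not guaranteed on every platform */
#include <stdio.h>
#include <stdlib.h>

struct node {
    int value;
    LIST_ENTRY(node) link;      /* embedded list pointers */
};

LIST_HEAD(node_list, node);

int main(void) {
    struct node_list head;
    LIST_INIT(&head);

    struct node *n = malloc(sizeof(*n));
    n->value = 42;
    LIST_INSERT_HEAD(&head, n, link);

    struct node *it;
    LIST_FOREACH(it, &head, link)
        printf("%d\n", it->value);

    free(n);
    return 0;
}
```
One common way projects sidestep this kind of problem is to bundle a private copy of queue.h instead of relying on the system header.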
",True," portability - While working on Solaris porting, we noticed that librdkafka only builds on Solaris if the SUNWhea package (kernel headers) is installed, because librdkafka is using sys/queue.h. The header is a BSD-ism, and may cause other portability problems in the future, so I thought I'd let you know.
",1, portability while working on solaris porting we noticed that librdkafka only builds on solaris if the sunwhea package kernel headers is installed because librdkafka is using sys queue h the header is a bsd ism and may cause other portability problems in the future so i thought i d let you know ,1
264,5094766902.0,IssuesEvent,2017-01-03 12:52:57,edenhill/librdkafka,https://api.github.com/repos/edenhill/librdkafka,closed,librdkafka on IBM machine with native compiler,portability,"# Description
I tried to build and test 32-bit librdkafka on an IBM machine using the native compiler. Here is what I observed:
Librdkafka-0.9.1
- I can build, but the rd_atomic32_get() (i.e. __sync_add_and_fetch()) operation crashes when running tests.
- I replaced the __sync_add_and_fetch() operations with __fetch_and_add() operations, but I cannot compile the code because long long is not permitted for __fetch_and_add(), and I cannot use __fetch_and_addlp() either with a 32-bit build. Then I just typecast/typedef'd int64\* to int32\*, which builds, but the tests crash on the atomic_\* operations.
Librdkafka-master
- I can build, but the rd_atomic32_get() (i.e. __sync_add_and_fetch()) operation crashes when running tests.
- If I undef HAVE_ATOMICS_32 and HAVE_ATOMICS_64, so that atomic operations use a mutex (see the sketch after the log excerpt below), then it builds fine but the tests hang, which looks like the following:
386258.376|METADATA|0001_multiobj#producer-4| 10.122.140.84:9092/bootstrap: Topic bib_rnd64b000005215_0001 partition 1 Leader 0
%7|1476386258.376|METADATA|0001_multiobj#producer-4| 10.122.140.84:9092/bootstrap: Topic bib_rnd64b000005215_0001 partition 3 Leader 2
%7|1476386258.376|METADATA|0001_multiobj#producer-4| 10.122.140.84:9092/bootstrap: Topic bib_rnd64b000005215_0001 partition 0 Leader 2
%7|1476386258.376|METADATA|0001_multiobj#producer-4| 10.122.140.84:9092/bootstrap: Requested topic bib_rnd64b000005215_0001 seen in metadata
%7|1476386259.297|PRODUCE|0001_multiobj#producer-4| mrplnjdmrsmr02:9092/2: produce messageset with 1 messages (64 bytes)
%7|1476386259.299|MSGSET|0001_multiobj#producer-4| mrplnjdmrsmr02:9092/2: MessageSet with 1 message(s) delivered
**%7|1476386259.299|REQERR|0001_multiobj#producer-4| mrplnjdmrsmr02:9092/2: ProduceRequest failed: Broker: Unknown topic or partition: explicit actions 0x4
%7|1476386259.299|MSGSET|0001_multiobj#producer-4| mrplnjdmrsmr02:9092/2: MessageSet with 1 message(s) encountered error: Broker: Unknown topic or partition (actions 0x4)**
%7|1476386259.299|METADATA|0001_multiobj#producer-4| mrplnydmrsmr02:9092/0: Request metadata for bib_rnd64b000005215_0001: leader query: scheduled: not in broker thread
%7|1476386259.377|METADATA|0001_multiobj#producer-4| mrplnydmrsmr02:9092/0: Request metadata for bib_rnd64b000005215_0001: leader query
%7|1476386259.377|METADATA|0001_multiobj#producer-4| mrplnydmrsmr02:9092/0: Request metadata for bib_rnd64b000005215_0001: leader query
%7|1476386259.377|METADATA|0001_multiobj#producer-4| mrplnydmrsmr02:9092/0: ===== Received metadata =====
%7|1476386259.378|METADATA|0001_multiobj#producer-4| mrplnydmrsmr02:9092/0: 3 brokers, 1 topics
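For context, here is a minimal sketch of the kind of mutex-protected fallback that undefining HAVE_ATOMICS_32/HAVE_ATOMICS_64 effectively selects. This is my own illustration, not librdkafka's actual code; the names rd_atomic64_t and rd_atomic64_add are only borrowed for readability.
```
/* Sketch only: a mutex-based stand-in for 64-bit atomics on platforms
 * without usable __sync builtins. Names are illustrative, not librdkafka's. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t lock;
    int64_t         val;
} rd_atomic64_t;

static void rd_atomic64_init(rd_atomic64_t *a, int64_t v) {
    pthread_mutex_init(&a->lock, NULL);
    a->val = v;
}

static int64_t rd_atomic64_add(rd_atomic64_t *a, int64_t delta) {
    pthread_mutex_lock(&a->lock);
    int64_t r = (a->val += delta);   /* same contract as __sync_add_and_fetch() */
    pthread_mutex_unlock(&a->lock);
    return r;
}

int main(void) {
    rd_atomic64_t cnt;
    rd_atomic64_init(&cnt, 0);
    printf("%lld\n", (long long)rd_atomic64_add(&cnt, 1)); /* prints 1 */
    return 0;
}
```
This only shows the shape of the fallback path; it does not by itself explain why the tests hang in the log above.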
I noticed that, in response to the following issue,
https://github.com/edenhill/librdkafka/issues/319
fixes were done for Solaris for memory alignment. Was the same fix ever tested for IBM machines? Any help will be appreciated.
# How to reproduce
# Checklist
Please provide the following information:
- [ ] librdkafka version (release number or git tag):
- [ ] Apache Kafka version:
- [ ] librdkafka client configuration:
- [ ] Operating system:
- [ ] Using the legacy Consumer
- [ ] Using the high-level KafkaConsumer
- [ ] Provide logs (with `debug=..` as necessary) from librdkafka
- [ ] Provide broker log excerpts
- [ ] Critical issue
",True,"librdkafka on IBM machine with native compiler - # Description
I tried to build and test 32 bit librdkafka on IBM machine using native compiler. Here is what I observed:
Librdkafka-0.9.1
- I can build but rd_atomic32_get() (i.e. __sync_add_and_fetch()) operation crashes on running tests.
- I replaced __sync_add_and_fetch() operations with __fetch_and_add() operations, but I cannot compile code because long long is not permitted on __fetch_and_add() and I cannot use __fetch_and_addlp() either with 32 bit build. Then I just typecasted/typedefed int64\* to int32\* which builds but tests crash on atomic_\* operations.
Librdkafka-master
- I can build but rd_atomic32_get() (i.e. __sync_add_and_fetch()) operation crashes on running tests.
- If I undef HAVE_ATOMICS_32 and HAVE_ATOMICS_64, so that atomic operations use mutex, then it builds fine but tests hang which looks like following:
386258.376|METADATA|0001_multiobj#producer-4| 10.122.140.84:9092/bootstrap: Topic bib_rnd64b000005215_0001 partition 1 Leader 0
%7|1476386258.376|METADATA|0001_multiobj#producer-4| 10.122.140.84:9092/bootstrap: Topic bib_rnd64b000005215_0001 partition 3 Leader 2
%7|1476386258.376|METADATA|0001_multiobj#producer-4| 10.122.140.84:9092/bootstrap: Topic bib_rnd64b000005215_0001 partition 0 Leader 2
%7|1476386258.376|METADATA|0001_multiobj#producer-4| 10.122.140.84:9092/bootstrap: Requested topic bib_rnd64b000005215_0001 seen in metadata
%7|1476386259.297|PRODUCE|0001_multiobj#producer-4| mrplnjdmrsmr02:9092/2: produce messageset with 1 messages (64 bytes)
%7|1476386259.299|MSGSET|0001_multiobj#producer-4| mrplnjdmrsmr02:9092/2: MessageSet with 1 message(s) delivered
**%7|1476386259.299|REQERR|0001_multiobj#producer-4| mrplnjdmrsmr02:9092/2: ProduceRequest failed: Broker: Unknown topic or partition: explicit actions 0x4
%7|1476386259.299|MSGSET|0001_multiobj#producer-4| mrplnjdmrsmr02:9092/2: MessageSet with 1 message(s) encountered error: Broker: Unknown topic or partition (actions 0x4)**
%7|1476386259.299|METADATA|0001_multiobj#producer-4| mrplnydmrsmr02:9092/0: Request metadata for bib_rnd64b000005215_0001: leader query: scheduled: not in broker thread
%7|1476386259.377|METADATA|0001_multiobj#producer-4| mrplnydmrsmr02:9092/0: Request metadata for bib_rnd64b000005215_0001: leader query
%7|1476386259.377|METADATA|0001_multiobj#producer-4| mrplnydmrsmr02:9092/0: Request metadata for bib_rnd64b000005215_0001: leader query
%7|1476386259.377|METADATA|0001_multiobj#producer-4| mrplnydmrsmr02:9092/0: ===== Received metadata =====
%7|1476386259.378|METADATA|0001_multiobj#producer-4| mrplnydmrsmr02:9092/0: 3 brokers, 1 topics
I noticed that in response to following issue,
https://github.com/edenhill/librdkafka/issues/319
fixes were done for Solaris for memory alignment. Was the same fix ever tested for IBM machines? Any help will be appreciated.
# How to reproduce
# Checklist
Please provide the following information:
- [ ] librdkafka version (release number or git tag):
- [ ] Apache Kafka version:
- [ ] librdkafka client configuration:
- [ ] Operating system:
- [ ] Using the legacy Consumer
- [ ] Using the high-level KafkaConsumer
- [ ] Provide logs (with `debug=..` as necessary) from librdkafka
- [ ] Provide broker log excerpts
- [ ] Critical issue
",1,librdkafka on ibm machine with native compiler description i tried to build and test bit librdkafka on ibm machine using native compiler here is what i observed librdkafka i can build but rd get i e sync add and fetch operation crashes on running tests i replaced sync add and fetch operations with fetch and add operations but i cannot compile code because long long is not permitted on fetch and add and i cannot use fetch and addlp either with bit build then i just typecasted typedefed to which builds but tests crash on atomic operations librdkafka master i can build but rd get i e sync add and fetch operation crashes on running tests if i undef have atomics and have atomics so that atomic operations use mutex then it builds fine but tests hang which looks like following metadata multiobj producer bootstrap topic bib partition leader metadata multiobj producer bootstrap topic bib partition leader metadata multiobj producer bootstrap topic bib partition leader metadata multiobj producer bootstrap requested topic bib seen in metadata produce multiobj producer produce messageset with messages bytes msgset multiobj producer messageset with message s delivered reqerr multiobj producer producerequest failed broker unknown topic or partition explicit actions msgset multiobj producer messageset with message s encountered error broker unknown topic or partition actions metadata multiobj producer request metadata for bib leader query scheduled not in broker thread metadata multiobj producer request metadata for bib leader query metadata multiobj producer request metadata for bib leader query metadata multiobj producer received metadata metadata multiobj producer brokers topics i noticed that in response to following issue fixes were done for solaris for memory alignment was the same fix ever tested for ibm machines any help will be appreciated how to reproduce checklist please provide the following information librdkafka version release number or git tag apache kafka version librdkafka client configuration operating system using the legacy consumer using the high level kafkaconsumer provide logs with debug as necessary from librdkafka provide broker log excerpts critical issue ,1
18,2613862241.0,IssuesEvent,2015-02-28 00:27:36,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,EditAndContinueTests.AnonymousTypes changes emit order in 64-bit runner,4 - In Review Area-Compilers Bug Portability,"Failure:
```
Microsoft.CodeAnalysis.VisualBasic.UnitTests.EditAndContinueTests.AnonymousTypes [FAIL]
Actual:
,
VB$AnonymousType_1`2,
VB$AnonymousType_2`1,
VB$AnonymousType_0`2,
A,
B
Differences:
,
VB$AnonymousType_1`2,
++> VB$AnonymousType_2`1,
VB$AnonymousType_0`2,
Stack Trace:
src\Test\Utilities\AssertEx.cs(208,0): at Roslyn.Test.Utilities.AssertEx.Equal[T](IEnumerable`1 expected, IEnumerable`1 actual, IEqualityComparer`1 comparer, String message, String itemSeparator, Func`2 itemInspector)
src\Compilers\VisualBasic\Test\Emit\Emit\EditAndContinueTestBase.vb(212,0): at Microsoft.CodeAnalysis.VisualBasic.UnitTests.EditAndContinueTestBase.CheckNames(MetadataReader[] readers, StringHandle[] handles, String[] expectedNames)
src\Compilers\VisualBasic\Test\Emit\Emit\EditAndContinueTestBase.vb(207,0): at Microsoft.CodeAnalysis.VisualBasic.UnitTests.EditAndContinueTestBase.CheckNames(MetadataReader reader, StringHandle[] handles, String[] expectedNames)
src\Compilers\VisualBasic\Test\Emit\Emit\EditAndContinueTests.vb(3328,0): at Microsoft.CodeAnalysis.VisualBasic.UnitTests.EditAndContinueTests.AnonymousTypes()
```
",True,"EditAndContinueTests.AnonymousTypes changes emit order in 64-bit runner - Failure:
```
Microsoft.CodeAnalysis.VisualBasic.UnitTests.EditAndContinueTests.AnonymousTypes [FAIL]
Actual:
,
VB$AnonymousType_1`2,
VB$AnonymousType_2`1,
VB$AnonymousType_0`2,
A,
B
Differences:
,
VB$AnonymousType_1`2,
++> VB$AnonymousType_2`1,
VB$AnonymousType_0`2,
Stack Trace:
src\Test\Utilities\AssertEx.cs(208,0): at Roslyn.Test.Utilities.AssertEx.Equal[T](IEnumerable`1 expected, IEnumerable`1 actual, IEqualityComparer`1 comparer, String message, String itemSeparator, Func`2 itemInspector)
src\Compilers\VisualBasic\Test\Emit\Emit\EditAndContinueTestBase.vb(212,0): at Microsoft.CodeAnalysis.VisualBasic.UnitTests.EditAndContinueTestBase.CheckNames(MetadataReader[] readers, StringHandle[] handles, String[] expectedNames)
src\Compilers\VisualBasic\Test\Emit\Emit\EditAndContinueTestBase.vb(207,0): at Microsoft.CodeAnalysis.VisualBasic.UnitTests.EditAndContinueTestBase.CheckNames(MetadataReader reader, StringHandle[] handles, String[] expectedNames)
src\Compilers\VisualBasic\Test\Emit\Emit\EditAndContinueTests.vb(3328,0): at Microsoft.CodeAnalysis.VisualBasic.UnitTests.EditAndContinueTests.AnonymousTypes()
```
",1,editandcontinuetests anonymoustypes changes emit order in bit runner failure microsoft codeanalysis visualbasic unittests editandcontinuetests anonymoustypes actual vb anonymoustype vb anonymoustype vb anonymoustype a b differences vb anonymoustype vb anonymoustype vb anonymoustype stack trace src test utilities assertex cs at roslyn test utilities assertex equal ienumerable expected ienumerable actual iequalitycomparer comparer string message string itemseparator func iteminspector src compilers visualbasic test emit emit editandcontinuetestbase vb at microsoft codeanalysis visualbasic unittests editandcontinuetestbase checknames metadatareader readers stringhandle handles string expectednames src compilers visualbasic test emit emit editandcontinuetestbase vb at microsoft codeanalysis visualbasic unittests editandcontinuetestbase checknames metadatareader reader stringhandle handles string expectednames src compilers visualbasic test emit emit editandcontinuetests vb at microsoft codeanalysis visualbasic unittests editandcontinuetests anonymoustypes huboard order milestone order custom state ,1
20414,3354495079.0,IssuesEvent,2015-11-18 12:29:10,hazelcast/hazelcast,https://api.github.com/repos/hazelcast/hazelcast,opened,Data loss occurs when member restarts for merge,Team: Core Type: Defect,"Here is the scenario for the issue:
1. A cluster with 7 members (6 lite, 1 regular) is set up. The configured merge policy for maps is hz.ADD_NEW_ENTRY.
2. A split brain occurs, separating 4 lite members from 2 lite + 1 regular members.
3. When the clusters rejoin, the smaller cluster with three members joins the larger one. All three members restart for the merge.
4. Data loss occurs after merge.
The expected behaviour is to recover all data after merge operations.",1.0,"Data loss occurs when member restarts for merge - Here is the scenario for the issue:
1. A cluster with 7 members (6 lite, 1 regular) sets up. Configured merge policy for maps is hz.ADD_NEW_ENTRY.
2. Split brain occurs with 4 lite members to 2 lite + 1 regular members.
3. After join, smaller cluster with three members join to larger one. All three members restart for merge.
4. Data loss occurs after merge.
The expected behaviour is to recover all data after merge operations.",0,data loss occurs when member restarts for merge here is the scenario for the issue a cluster with members lite regular sets up configured merge policy for maps is hz add new entry split brain occurs with lite members to lite regular members after join smaller cluster with three members join to larger one all three members restart for merge data loss occurs after merge the expected behaviour is to recover all data after merge operations ,0
499917,14482424206.0,IssuesEvent,2020-12-10 13:58:07,traefik/traefik,https://api.github.com/repos/traefik/traefik,closed,IngressRoute resource allows cross-namespace routing by default,area/provider/k8s/crd kind/bug/confirmed priority/P1,"
### Do you want to request a *feature* or report a *bug*?
Bug
### What did you do?
This is a replication and investigation report for the issue reported in https://github.com/containous/traefik/issues/7151 and discussed on the community forums https://community.traefik.io/t/cross-namespaces-ingressroutes-and-services/7419/19
- Prior to Traefik v2.1, `IngressRoute` was not capable of routing to services outside of its own ""root"" namespace
- This limitation appears to coincide with the constraints on K8S native `Ingress` based on an [issue reported](https://github.com/containous/traefik/issues/5748#issuecomment-547323939) running Traefik 2.0.4
- This limitation was removed with the introduction of [this PR](https://github.com/traefik/traefik/pull/5711), and shipped with Traefik v2.1 to current.
- While a user can [define which namespaces can be watched](https://doc.traefik.io/traefik/providers/kubernetes-crd/#namespaces), this does not restrict an `IngressRoute` to services in its own namespace, which is the expectation set by native K8S `Ingress` objects and by the behavior of `IngressRoute` prior to 2.1.
- This behavior is specific to Traefik and `IngressRoute` and I could reproduce the regressed behavior on v2.0, and produce the cross-namespace behavior on 2.3 on K8S 1.16, 1.17, and 1.18 (this is not something K8S will restrict)
### What did you expect to see?
`IngressRoute` provider configuration to contain a flag that enables cross-namespace routing behavior (this is a worthwhile feature, IMO).
### What did you see instead?
By default, `IngressRoute` will allow users to cross namespace boundaries, even though the user would expect this behavior to be disallowed given the current constraints on `Ingress`
### Output of `traefik version`: (_What version of Traefik are you using?_)
* Traefik 2.0.4
* Traefik 2.3
### What is your environment & configuration (arguments, toml, provider, platform, ...)?
See original issue for environment / configuration. Additional tests were run on:
* Kubernetes 1.16
* Kubernetes 1.17
### If applicable, please paste the log output in DEBUG level (`--log.level=DEBUG` switch)
n/a
",1.0,"IngressRoute resource allows cross-namespace routing by default -
### Do you want to request a *feature* or report a *bug*?
Bug
### What did you do?
This is a replication and investigation report for the issue reported in https://github.com/containous/traefik/issues/7151 and discussed on the community forums https://community.traefik.io/t/cross-namespaces-ingressroutes-and-services/7419/19
- Prior to Traefik v2.1, `IngressRoute` was not capable of routing to services outside of its own ""root"" namespace
- This limitation appears to coincide with the constraints on K8S native `Ingress` based on an [issue reported](https://github.com/containous/traefik/issues/5748#issuecomment-547323939) running Traefik 2.0.4
- This limitation was removed with the introduction of [this PR](https://github.com/traefik/traefik/pull/5711), and shipped with Traefik v2.1 to current.
- While a user can [define which namespaces can be watched](https://doc.traefik.io/traefik/providers/kubernetes-crd/#namespaces), this does not restrict an `IngressRoute` to services within its own namespace, which is the expectation set by native K8S `Ingress` objects and by the behavior of `IngressRoute` prior to 2.1.
- This behavior is specific to Traefik and `IngressRoute` and I could reproduce the regressed behavior on v2.0, and produce the cross-namespace behavior on 2.3 on K8S 1.16, 1.17, and 1.18 (this is not something K8S will restrict)
### What did you expect to see?
`IngressRoute` provider configuration to contain a flag that enables cross-namespace routing behavior (this is a worthwhile feature, IMO).
### What did you see instead?
By default, `IngressRoute` will allow users to cross namespace boundaries even though the user would expect this behavior to be disallowed given current constraints on `Ingress`
### Output of `traefik version`: (_What version of Traefik are you using?_)
* Traefik 2.0.4
* Traefik 2.3
### What is your environment & configuration (arguments, toml, provider, platform, ...)?
See original issue for environment / configuration. Additional tests were run on:
* Kubernetes 1.16
* Kubernetes 1.17
### If applicable, please paste the log output in DEBUG level (`--log.level=DEBUG` switch)
n/a
",0,ingressroute resource allows cross namespace routing by default do you want to request a feature or report a bug do not file issues for general support questions the issue tracker is for reporting bugs and feature requests only for end user related support questions please refer to one of the following the traefik community forum bug the configurations between x and x are not compatible please have a look here what did you do this is a replication and investigation report for the issue reported in and discussed on the community forums prior to traefik ingressroute was not capable of routing to services outside of its own root namespace this limitation appears to coincide with the constraints on native ingress based on an running traefik this limitation was removed with the introduction of and shipped with traefik to current while a user can this does not constrain the user s ability to restrict ingressroute to their respective namespaces which is based on an expectation set by native ingress objects and the behavior of ingressroute prior to this behavior is specific to traefik and ingressroute and i could reproduce the regressed behavior on and produce the cross namespace behavior on on and this is not something will restrict how to write a good bug report respect the issue template as much as possible the title should be short and descriptive explain the conditions which led you to report this issue the context the context should lead to something an idea or a problem that you’re facing remain clear and concise format your messages to help the reader focus on what matters and understand the structure of your message use markdown syntax what did you expect to see ingressroute provider configuration to contain a flag that enables cross namespace routing behavior this is a worthwhile feature imo what did you see instead by default ingressroute will allow users to cross namespace boundaries even though the user would expect this behavior to be disallowed given current constraints on ingress output of traefik version what version of traefik are you using traefik traefik latest is not considered as a valid version for the traefik docker image docker run version ex docker run traefik version what is your environment configuration arguments toml provider platform see original issue for environment configuration additional tests were run on kubernetes kubernetes add more configuration information here if applicable please paste the log output in debug level log level debug switch n a ,0
620,8373602162.0,IssuesEvent,2018-10-05 10:57:36,edenhill/librdkafka,https://api.github.com/repos/edenhill/librdkafka,closed,librdkafka installation on Windows,enhancement portability windows,"Description
===========
I am trying to install https://github.com/confluentinc/confluent-kafka-python on Windows. This package requires librdkafka.
So I am looking for the following things:
1. How is librdkafka installed on Windows?
2. Which branch should be used for installation on Windows?
How to reproduce
================
Checklist
=========
Please provide the following information:
- [ ] librdkafka version (release number or git tag):
- [ ] Apache Kafka version:
- [ ] librdkafka client configuration:
- [ ] Operating system:
- [ ] Using the legacy Consumer
- [ ] Using the high-level KafkaConsumer
- [ ] Provide logs (with `debug=..` as necessary) from librdkafka
- [ ] Provide broker log excerpts
- [ ] Critical issue
",True,"librdkafka installation on Windows - Description
===========
I am trying to install https://github.com/confluentinc/confluent-kafka-python on Windows. This package requires librdkafka.
So I am looking for the following things:
1. How is librdkafka installed on Windows?
2. Which branch should be used for installation on Windows?
How to reproduce
================
Checklist
=========
Please provide the following information:
- [ ] librdkafka version (release number or git tag):
- [ ] Apache Kafka version:
- [ ] librdkafka client configuration:
- [ ] Operating system:
- [ ] Using the legacy Consumer
- [ ] Using the high-level KafkaConsumer
- [ ] Provide logs (with `debug=..` as necessary) from librdkafka
- [ ] Provide broker log excerpts
- [ ] Critical issue
",1,librdkafka installation on windows description i am trying to install on windows this package requires librdkafka so i am looking out for following things how librdkafka installed on windows which branch should be used for installation on winodws how to reproduce checklist please provide the following information librdkafka version release number or git tag apache kafka version librdkafka client configuration operating system using the legacy consumer using the high level kafkaconsumer provide logs with debug as necessary from librdkafka provide broker log excerpts critical issue ,1
113419,24416067818.0,IssuesEvent,2022-10-05 15:59:03,dwp/design-system,https://api.github.com/repos/dwp/design-system,closed,Find an address: Designs,🔗 component find an address/postcode,"## What
Sub task for Find my address by postcode/house number #237. Designing the journey and screens where typeahead/autocomplete is not used i.e. a non JavaScript journey.
## Why
After discussion with Sarat from the common capabilities team working on the SRA (formerly ARA) lookup, research into the component appears to have been deprioritised. The address lookup requires further development. In the meantime there are examples of address lookup being used which could be published and, in the future, be enhanced by any typeahead functionality.
## Done when
- [x] Reach out to designers using the find an address component and gather insights
- [x] Analyse insights and rough out steps and fields for a user journey
- [x] Review with design system team
- [x] Update design content and layout based on review
- [x] Document design decisions
- [x] Build pages
- [x] Collate feedback from designers and accessibility team on solution
- [x] Update designs based on feedback
## Outcomes
- Designs for a find an address journey
## Who needs to know about this
- Design System team
- Design community
- Accessibility team
## Related stories
#237
## Anything else
[Mural board](https://app.mural.co/t/dwpdigital7412/m/dwpdigital7412/1657636707008/e8e422d466ec6f13e6b101899d14a9cf630b0500?sender=ub2947a7492ef409e91952034)
",1.0,"Find an address: Designs - ## What
Sub task for Find my address by postcode/house number #237. Designing the journey and screens where typeahead/autocomplete is not used i.e. a non JavaScript journey.
## Why
After discussion with Sarat from the common capabilities team working on the SRA (formerly ARA) lookup, research into the component appears to have been deprioritised. The address lookup requires further development. In the meantime there are examples of address lookup being used which could be published and, in the future, be enhanced by any typeahead functionality.
## Done when
- [x] Reach out to designers using the find an address component and gather insights
- [x] Analyse insights and rough out steps and fields for a user journey
- [x] Review with design system team
- [x] Update design content and layout based on review
- [x] Document design decisions
- [x] Build pages
- [x] Collate feedback from designers and accessibility team on solution
- [x] Update designs based on feedback
## Outcomes
- Designs for a find an address journey
## Who needs to know about this
- Design System team
- Design community
- Accessibility team
## Related stories
#237
## Anything else
[Mural board](https://app.mural.co/t/dwpdigital7412/m/dwpdigital7412/1657636707008/e8e422d466ec6f13e6b101899d14a9cf630b0500?sender=ub2947a7492ef409e91952034)
",0,find an address designs what sub task for find my address by postcode house number designing the journey and screens where typeahead autocomplete is not used i e a non javascript journey why after discussion with sarat from common capabilities team working on the the sra formerly ara lookup there appears to be deprioritised looking into research for the component the address lookup requires further development in the mean time there are examples of address lookup being used which could be published and and in the future be enhanced by any typeahead functionality done when reach out to designers using the find an address component and gather insights analyse insights and rough out steps and fields for a user journey review with design system team update design content and layout based on review document design decisions build pages collate feedback from designers and accessibility team on solution update designs based on feedback outcomes designs for a find an address journey who needs to know about this design system team design community accessibility team related stories anything else ,0
121491,12127384234.0,IssuesEvent,2020-04-22 18:37:59,onaio/gisida,https://api.github.com/repos/onaio/gisida,opened,Add Docs for Gisida Sprint Cycle,:water_buffalo: core documentation project-management,"As a project manager and/or developer, I should be able to refer to documentation in the `/docs` folder about how the Gisida Sprints are structured.
Epic: https://github.com/onaio/gisida/issues/454",1.0,"Add Docs for Gisida Sprint Cycle - As a project manager and/or developer, I should be able to refer to documentation in the `/docs` folder about how the Gisida Sprints are structured.
Epic: https://github.com/onaio/gisida/issues/454",0,add docs for gisida sprint cycle as a project manager and or developer i should be able to refer to documentation in the docs folder about how the gisida sprints are structured epic ,0
584299,17411157540.0,IssuesEvent,2021-08-03 12:31:12,GSG-G10/Hakuna-Matata,https://api.github.com/repos/GSG-G10/Hakuna-Matata,opened,Files structure,priority-0,"## Initializing Files structure for the project and Link files together
```
├── images
├── public
│ ├── index.html
│ └── index.js
│ └── xhr.js
│ └── dom.js
├── src
│ ├── server.js
│ └── router.js
│ └── data.json
│ └── handlers
│ ├── index.js
│ └── homeHandler.js
│ └── publicHandler.js
│ └── searchHandler.js
├── images
│ └── test.js
├── .gitignore
├── README.md
└── package.json
```",1.0,"Files structure - ## Initializing Files structure for the project and Link files together
```
├── images
├── public
│ ├── index.html
│ └── index.js
│ └── xhr.js
│ └── dom.js
├── src
│ ├── server.js
│ └── router.js
│ └── data.json
│ └── handlers
│ ├── index.js
│ └── homeHandler.js
│ └── publicHandler.js
│ └── searchHandler.js
├── images
│ └── test.js
├── .gitignore
├── README.md
└── package.json
```",0,files structure initializing files structure for the project and link files together ├── images ├── public │ ├── index html │ └── index js │ └── xhr js │ └── dom js ├── src │ ├── server js │ └── router js │ └── data json │ └── handlers │ ├── index js │ └── homehandler js │ └── publichandler js │ └── searchhandler js ├── images │ └── test js ├── gitignore ├── readme md └── package json ,0
1421,21165579158.0,IssuesEvent,2022-04-07 13:20:05,edenhill/librdkafka,https://api.github.com/repos/edenhill/librdkafka,closed,[RFE] symbol versionning,enhancement portability,"Hi,
I notice librdkafka uses symbol visibility/versioning via ` --version-script=librdkafka.lds`
But sadly, all symbols are in the ""global"" section.
It would be nice to have symbols listed per version.
Ex, from http://rpms.remirepo.net/compat_reports/librdkafka/0.9.5_to_0.11.0-RC2/compat_report.html
Upcoming version 0.11 introduces 15 new symbols.
Example for libxml2 => https://git.gnome.org/browse/libxml2/tree/libxml2.syms
This will be useful for downstream distribution, as this can be auto-translated to dependencies (at least RPM does this)
```
$ rpm -q --requires php-xml
libxml2.so.2()(64bit)
libxml2.so.2(LIBXML2_2.4.30)(64bit)
libxml2.so.2(LIBXML2_2.5.0)(64bit)
libxml2.so.2(LIBXML2_2.5.2)(64bit)
libxml2.so.2(LIBXML2_2.5.4)(64bit)
...
$ rpm -q --requires php-pecl-rdkafka afka
librdkafka.so.1()(64bit)
```
So this ensures an application using librdkafka (e.g. pecl/librdkafka) won't be installed without the proper version (the soname is not enough: it covers dropped symbols, when the soname is bumped, but not added symbols, when the soname doesn't change)
For now, the workaround is to ensure runtime version >= buildtime version using a manually added dependency (in the spec file).
```
%global buildver %(pkg-config --silence-errors --modversion rdkafka 2>/dev/null || echo 65536)
Requires: librdkafka%{?_isa} >= %{buildver}
```",True,"[RFE] symbol versionning - Hi,
I notice librdkafka uses symbol visibility/versioning via ` --version-script=librdkafka.lds`
But sadly, all symbols are in the ""global"" section.
It would be nice to have symbols listed per version.
Ex, from http://rpms.remirepo.net/compat_reports/librdkafka/0.9.5_to_0.11.0-RC2/compat_report.html
Upcoming version 0.11 introduces 15 new symbols.
Example for libxml2 => https://git.gnome.org/browse/libxml2/tree/libxml2.syms
This will be useful for downstream distribution, as this can be auto-translated to dependencies (at least RPM does this)
```
$ rpm -q --requires php-xml
libxml2.so.2()(64bit)
libxml2.so.2(LIBXML2_2.4.30)(64bit)
libxml2.so.2(LIBXML2_2.5.0)(64bit)
libxml2.so.2(LIBXML2_2.5.2)(64bit)
libxml2.so.2(LIBXML2_2.5.4)(64bit)
...
$ rpm -q --requires php-pecl-rdkafka afka
librdkafka.so.1()(64bit)
```
So this ensures an application using librdkafka (e.g. pecl/librdkafka) won't be installed without the proper version (the soname is not enough: it covers dropped symbols, when the soname is bumped, but not added symbols, when the soname doesn't change)
For now, the workaround is to ensure runtime version >= buildtime version using a manually added dependency (in the spec file).
```
%global buildver %(pkg-config --silence-errors --modversion rdkafka 2>/dev/null || echo 65536)
Requires: librdkafka%{?_isa} >= %{buildver}
```",1, symbol versionning hi i notice librdkafka use symbol visibility versionning using version script librdkafka lds but sadly all symbols are in the global section will be nice to have symbols listed per version ex from upcomming version introduce new symbols example for this will be useful for downstream distribution as this can be auto translated to dependencies at least rpm does this rpm q requires php xml so so so so so rpm q requires php pecl rdkafka afka librdkafka so so this ensure application using librdkafka ex pecl librdkafka won t be installed without the proper version soname is not enough ok for dropped symbols when soname is bump not for added symbols when soname doesn t change for now the workaround is to ensure runtime version buildtime version using a manually added dependency in the spec file global buildver pkg config silence errors modversion rdkafka dev null echo requires librdkafka isa buildver ,1
589,7986153974.0,IssuesEvent,2018-07-19 00:20:28,rust-lang-nursery/stdsimd,https://api.github.com/repos/rust-lang-nursery/stdsimd,closed,Portable instructions are not inlined properly in PowerPC,A-portable A-powerpc bug,"See https://github.com/rust-lang-nursery/stdsimd/pull/447#issuecomment-389258789
```shell
disassembly for coresimd::coresimd::powerpc::altivec::sealed::assert_vec_add_bc_sc_vaddubm::vec_add_bc_sc_shim:
0: addis r2,r12,16
1: addi r2,r2,31904
2: mflr r0
3: std r0,16(r1)
4: stdu r1,-160(r1)
5: std r30,144(r1)
6: li r3,128
7: addi r30,r1,112
8: stxvd2x vs63,r1,r3
9: mr r3,r30
10: vmr v31,v3
11: bl 10220 <_ZN65_$LT$T$u20$as$u20$coresimd..coresimd..ppsv..IntoBits$LT$U$GT$$GT$9into_bits17h7dd9bb155da76093E>
12: lvx v2,0,r30
13: li r3,128
14: ld r30,144(r1)
15: vaddubm v2,v2,v31
16: lxvd2x vs63,r1,r3
17: addi r1,r1,160
18: ld r0,16(r1)
19: mtlr r0
20: blr
21:
thread 'coresimd::powerpc::altivec::sealed::assert_vec_add_bc_sc_vaddubm' panicked at 'instruction found, but the disassembly contains too many instructions: #instructions = 22 >= 20 (limit)', crates/stdsimd-test/src/lib.rs:385:9
```
@alexcrichton wrote:
```llvm-ir
target triple = ""powerpc64le-unknown-linux-gnu""
define internal void @foo(i32*, i32* %self) {
start:
%1 = load i32, i32* %self
store i32 %1, i32* %0
ret void
}
define void @bar(i32* %a, i32* %b) #0 {
start:
tail call void @foo(i32* %a, i32* %b)
ret void
}
attributes #0 = { ""target-features""=""+altivec"" }
```
> That's the fully optimized IR and it won't optimize any further. I don't know enough about PowerPC to know whether this is an LLVM bug or not.",True,"Portable instructions are not inlined properly in PowerPC - See https://github.com/rust-lang-nursery/stdsimd/pull/447#issuecomment-389258789
```shell
disassembly for coresimd::coresimd::powerpc::altivec::sealed::assert_vec_add_bc_sc_vaddubm::vec_add_bc_sc_shim:
0: addis r2,r12,16
1: addi r2,r2,31904
2: mflr r0
3: std r0,16(r1)
4: stdu r1,-160(r1)
5: std r30,144(r1)
6: li r3,128
7: addi r30,r1,112
8: stxvd2x vs63,r1,r3
9: mr r3,r30
10: vmr v31,v3
11: bl 10220 <_ZN65_$LT$T$u20$as$u20$coresimd..coresimd..ppsv..IntoBits$LT$U$GT$$GT$9into_bits17h7dd9bb155da76093E>
12: lvx v2,0,r30
13: li r3,128
14: ld r30,144(r1)
15: vaddubm v2,v2,v31
16: lxvd2x vs63,r1,r3
17: addi r1,r1,160
18: ld r0,16(r1)
19: mtlr r0
20: blr
21:
thread 'coresimd::powerpc::altivec::sealed::assert_vec_add_bc_sc_vaddubm' panicked at 'instruction found, but the disassembly contains too many instructions: #instructions = 22 >= 20 (limit)', crates/stdsimd-test/src/lib.rs:385:9
```
@alexcrichton wrote:
```llvm-ir
target triple = ""powerpc64le-unknown-linux-gnu""
define internal void @foo(i32*, i32* %self) {
start:
%1 = load i32, i32* %self
store i32 %1, i32* %0
ret void
}
define void @bar(i32* %a, i32* %b) #0 {
start:
tail call void @foo(i32* %a, i32* %b)
ret void
}
attributes #0 = { ""target-features""=""+altivec"" }
```
> That's the fully optimized IR and it won't optimize any further. I don't know enough about PowerPC to know whether this is an LLVM bug or not.",1,portable instructions are not inlined properly in powerpc see shell disassembly for coresimd coresimd powerpc altivec sealed assert vec add bc sc vaddubm vec add bc sc shim addis addi mflr std stdu std li addi mr vmr bl lvx li ld vaddubm addi ld mtlr blr thread coresimd powerpc altivec sealed assert vec add bc sc vaddubm panicked at instruction found but the disassembly contains too many instructions instructions limit crates stdsimd test src lib rs alexcrichton wrote llvm ir target triple unknown linux gnu define internal void foo self start load self store ret void define void bar a b start tail call void foo a b ret void attributes target features altivec that s the fully optimized ir and it won t optimize any further i don t know enough about powerpc to know whether this is an llvm bug or not ,1
637,8557155645.0,IssuesEvent,2018-11-08 15:04:11,plotly/dash-table,https://api.github.com/repos/plotly/dash-table,closed,JS facing package does not expose the DataTable component,Attribute: Supportability Type: Maintenance,"Make it possible to use the DataTable easily from Javascript.
Currently it needs to be loaded like this:
```
import 'dash-table/dash_table/bundle';
import domReady from './domReady';
const DataTable = window.dash_table.DataTable;
ReactDOM.render( ({ id: i, name: i }))} />, document.getElementById('root'));
```
We want to be able to do this:
```
import DataTable from 'dash-table';
// etc...
```",True,"JS facing package does not expose the DataTable component - Make it possible to use the DataTable easily from Javascript.
Currently it needs to be loaded like this:
```
import 'dash-table/dash_table/bundle';
import domReady from './domReady';
const DataTable = window.dash_table.DataTable;
ReactDOM.render( ({ id: i, name: i }))} />, document.getElementById('root'));
```
We want to be able to do this:
```
import DataTable from 'dash-table';
// etc...
```",1,js facing package does not expose the datatable component make it possible to use the datatable easily from javascript currently it needs to be loaded like this import dash table dash table bundle import domready from domready const datatable window dash table datatable reactdom render id i name i document getelementbyid root we want to be able to do this import datatable from dash table etc ,1
244,4805702984.0,IssuesEvent,2016-11-02 16:40:03,jemalloc/jemalloc,https://api.github.com/repos/jemalloc/jemalloc,closed,Issues compiling under CYGWIN,portability,"There are several issues compiling under CYGWIN
```
$ uname -a
CYGWIN_NT-10.0 host 2.2.1(0.289/5/3) 2015-08-20 11:42 x86_64 Cygwin
```
Namely:
1. there is no `<sys/syscall.h>` under cygwin. need to wrap the include with `#ifndef __CYGWIN__`
2. issues with RTLD_NEXT usage, it is not supported under CYGWIN.
3. Complain about `error ""No madvise(2) flag defined for purging unused dirty pages.""`
Full Log:
```
$ ./configure
checking for xsltproc... false
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.exe
checking for suffix of executables... .exe
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether compiler supports -std=gnu99... yes
checking whether compiler supports -Wall... yes
checking whether compiler supports -Werror=declaration-after-statement... yes
checking whether compiler supports -pipe... yes
checking whether compiler supports -g3... yes
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking whether byte ordering is bigendian... no
checking size of void *... 8
checking size of int... 4
checking size of long... 8
checking size of intmax_t... 8
checking build system type... x86_64-unknown-cygwin
checking host system type... x86_64-unknown-cygwin
checking whether pause instruction is compilable... yes
checking for ar... ar
checking malloc.h usability... yes
checking malloc.h presence... yes
checking for malloc.h... yes
checking whether malloc_usable_size definition can use const argument... no
checking whether __attribute__ syntax is compilable... yes
checking whether compiler supports -Werror... yes
checking whether tls_model attribute is compilable... yes
checking whether compiler supports -Werror... yes
checking whether alloc_size attribute is compilable... yes
checking whether compiler supports -Werror... yes
checking whether format(gnu_printf, ...) attribute is compilable... yes
checking whether compiler supports -Werror... yes
checking whether format(printf, ...) attribute is compilable... yes
checking for a BSD-compatible install... /usr/bin/install -c
checking for ranlib... ranlib
checking for ld... /usr/bin/ld
checking for autoconf... /usr/bin/autoconf
checking for memalign... yes
checking for valloc... yes
checking whether compiler supports -O3... yes
checking whether compiler supports -funroll-loops... yes
checking configured backtracing method... N/A
checking for sbrk... yes
checking whether utrace(2) is compilable... no
checking whether valgrind is compilable... no
checking whether a program using __builtin_ffsl is compilable... yes
checking LG_PAGE... 16
Missing VERSION file, and unable to generate it; creating bogus VERSION
checking for library containing clock_gettime... none required
checking for secure_getenv... no
checking for issetugid... yes
checking for _malloc_thread_cleanup... no
checking for _pthread_mutex_init_calloc_cb... no
Forcing lazy-lock to avoid allocator/threading bootstrap issues
Forcing no TLS to avoid allocator/threading bootstrap issues
checking whether C11 atomics is compilable... no
checking whether atomic(9) is compilable... no
checking whether Darwin OSAtomic*() is compilable... no
checking whether madvise(2) is compilable... yes
checking whether to force 32-bit __sync_{add,sub}_and_fetch()... no
checking whether to force 64-bit __sync_{add,sub}_and_fetch()... no
checking for __builtin_clz... yes
checking whether Darwin OSSpin*() is compilable... no
checking whether glibc malloc hook is compilable... no
checking whether glibc memalign hook is compilable... no
checking whether pthreads adaptive mutexes is compilable... no
checking for stdbool.h that conforms to C99... yes
checking for _Bool... yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating jemalloc.pc
config.status: creating doc/html.xsl
config.status: creating doc/manpages.xsl
config.status: creating doc/jemalloc.xml
config.status: creating include/jemalloc/jemalloc_macros.h
config.status: creating include/jemalloc/jemalloc_protos.h
config.status: creating include/jemalloc/jemalloc_typedefs.h
config.status: creating include/jemalloc/internal/jemalloc_internal.h
config.status: creating test/test.sh
config.status: creating test/include/test/jemalloc_test.h
config.status: creating config.stamp
config.status: creating bin/jemalloc-config
config.status: creating bin/jemalloc.sh
config.status: creating bin/jeprof
config.status: creating include/jemalloc/jemalloc_defs.h
config.status: creating include/jemalloc/internal/jemalloc_internal_defs.h
config.status: creating test/include/test/jemalloc_test_defs.h
config.status: test/include/test/jemalloc_test_defs.h is unchanged
config.status: executing include/jemalloc/internal/private_namespace.h commands
config.status: executing include/jemalloc/internal/private_unnamespace.h commands
config.status: executing include/jemalloc/internal/public_symbols.txt commands
config.status: executing include/jemalloc/internal/public_namespace.h commands
config.status: executing include/jemalloc/internal/public_unnamespace.h commands
config.status: executing include/jemalloc/internal/size_classes.h commands
config.status: executing include/jemalloc/jemalloc_protos_jet.h commands
config.status: executing include/jemalloc/jemalloc_rename.h commands
config.status: executing include/jemalloc/jemalloc_mangle.h commands
config.status: executing include/jemalloc/jemalloc_mangle_jet.h commands
config.status: executing include/jemalloc/jemalloc.h commands
===============================================================================
jemalloc version : 0.0.0-0-g0000000000000000000000000000000000000000
library revision : 2
CONFIG :
CC : gcc
CFLAGS : -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops
CPPFLAGS : -D_REENTRANT
LDFLAGS :
EXTRA_LDFLAGS :
LIBS :
TESTLIBS :
RPATH_EXTRA :
XSLTPROC : false
XSLROOT :
PREFIX : /usr/local
BINDIR : /usr/local/bin
DATADIR : /usr/local/share
INCLUDEDIR : /usr/local/include
LIBDIR : /usr/local/lib
MANDIR : /usr/local/share/man
srcroot :
abs_srcroot : /home/sunyc_000/src/fluffos/src/thirdparty/jemalloc/
objroot :
abs_objroot : /home/sunyc_000/src/fluffos/src/thirdparty/jemalloc/
JEMALLOC_PREFIX : je_
JEMALLOC_PRIVATE_NAMESPACE
: je_
install_suffix :
autogen : 0
cc-silence : 1
debug : 0
code-coverage : 0
stats : 1
prof : 0
prof-libunwind : 0
prof-libgcc : 0
prof-gcc : 0
tcache : 1
fill : 1
utrace : 0
valgrind : 0
xmalloc : 0
munmap : 1
lazy_lock : 1
tls : 0
cache-oblivious : 1
===============================================================================
sunyc_000@vivien-surface ~/src/fluffos/src/thirdparty/jemalloc
$ make
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/jemalloc.o src/jemalloc.c
In file included from include/jemalloc/internal/jemalloc_internal.h:5:0,
from src/jemalloc.c:2:
include/jemalloc/internal/jemalloc_internal_decls.h:13:29: fatal error: sys/syscall.h: No such file or directory
 # include <sys/syscall.h>
^
compilation terminated.
Makefile:243: recipe for target 'src/jemalloc.o' failed
make: *** [src/jemalloc.o] Error 1
sunyc_000@vivien-surface ~/src/fluffos/src/thirdparty/jemalloc
$ vi include/jemalloc/internal/jemalloc_internal_decls.h
sunyc_000@vivien-surface ~/src/fluffos/src/thirdparty/jemalloc
$ make
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/jemalloc.o src/jemalloc.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/arena.o src/arena.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/atomic.o src/atomic.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/base.o src/base.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/bitmap.o src/bitmap.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/chunk.o src/chunk.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/chunk_dss.o src/chunk_dss.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/chunk_mmap.o src/chunk_mmap.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/ckh.o src/ckh.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/ctl.o src/ctl.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/extent.o src/extent.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/hash.o src/hash.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/huge.o src/huge.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/mb.o src/mb.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/mutex.o src/mutex.c
src/mutex.c: In function ‘pthread_create_once’:
src/mutex.c:41:30: error: ‘RTLD_NEXT’ undeclared (first use in this function)
pthread_create_fptr = dlsym(RTLD_NEXT, ""pthread_create"");
^
src/mutex.c:41:30: note: each undeclared identifier is reported only once for each function it appears in
Makefile:243: recipe for target 'src/mutex.o' failed
make: *** [src/mutex.o] Error 1
sunyc_000@vivien-surface ~/src/fluffos/src/thirdparty/jemalloc
$ vi src/mutex.c
sunyc_000@vivien-surface ~/src/fluffos/src/thirdparty/jemalloc
$ make
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/mutex.o src/mutex.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/pages.o src/pages.c
src/pages.c: In function ‘je_pages_purge’:
src/pages.c:161:6: error: #error ""No madvise(2) flag defined for purging unused dirty pages.""
# error ""No madvise(2) flag defined for purging unused dirty pages.""
^
src/pages.c:163:32: error: ‘JEMALLOC_MADV_PURGE’ undeclared (first use in this function)
int err = madvise(addr, size, JEMALLOC_MADV_PURGE);
^
src/pages.c:163:32: note: each undeclared identifier is reported only once for each function it appears in
src/pages.c:164:15: error: ‘JEMALLOC_MADV_ZEROS’ undeclared (first use in this function)
unzeroed = (!JEMALLOC_MADV_ZEROS || err != 0);
^
Makefile:243: recipe for target 'src/pages.o' failed
make: *** [src/pages.o] Error 1
```
",True,"Issues compiling under CYGWIN - There are several issues compiling under CYGWIN
```
$ uname -a
CYGWIN_NT-10.0 host 2.2.1(0.289/5/3) 2015-08-20 11:42 x86_64 Cygwin
```
Namely:
1. there is no `<sys/syscall.h>` under cygwin. need to wrap the include with `#ifndef __CYGWIN__`
2. issues with RTLD_NEXT usage, it is not supported under CYGWIN.
3. Complain about `error ""No madvise(2) flag defined for purging unused dirty pages.""`
Full Log:
```
$ ./configure
checking for xsltproc... false
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.exe
checking for suffix of executables... .exe
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether compiler supports -std=gnu99... yes
checking whether compiler supports -Wall... yes
checking whether compiler supports -Werror=declaration-after-statement... yes
checking whether compiler supports -pipe... yes
checking whether compiler supports -g3... yes
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking whether byte ordering is bigendian... no
checking size of void *... 8
checking size of int... 4
checking size of long... 8
checking size of intmax_t... 8
checking build system type... x86_64-unknown-cygwin
checking host system type... x86_64-unknown-cygwin
checking whether pause instruction is compilable... yes
checking for ar... ar
checking malloc.h usability... yes
checking malloc.h presence... yes
checking for malloc.h... yes
checking whether malloc_usable_size definition can use const argument... no
checking whether __attribute__ syntax is compilable... yes
checking whether compiler supports -Werror... yes
checking whether tls_model attribute is compilable... yes
checking whether compiler supports -Werror... yes
checking whether alloc_size attribute is compilable... yes
checking whether compiler supports -Werror... yes
checking whether format(gnu_printf, ...) attribute is compilable... yes
checking whether compiler supports -Werror... yes
checking whether format(printf, ...) attribute is compilable... yes
checking for a BSD-compatible install... /usr/bin/install -c
checking for ranlib... ranlib
checking for ld... /usr/bin/ld
checking for autoconf... /usr/bin/autoconf
checking for memalign... yes
checking for valloc... yes
checking whether compiler supports -O3... yes
checking whether compiler supports -funroll-loops... yes
checking configured backtracing method... N/A
checking for sbrk... yes
checking whether utrace(2) is compilable... no
checking whether valgrind is compilable... no
checking whether a program using __builtin_ffsl is compilable... yes
checking LG_PAGE... 16
Missing VERSION file, and unable to generate it; creating bogus VERSION
checking for library containing clock_gettime... none required
checking for secure_getenv... no
checking for issetugid... yes
checking for _malloc_thread_cleanup... no
checking for _pthread_mutex_init_calloc_cb... no
Forcing lazy-lock to avoid allocator/threading bootstrap issues
Forcing no TLS to avoid allocator/threading bootstrap issues
checking whether C11 atomics is compilable... no
checking whether atomic(9) is compilable... no
checking whether Darwin OSAtomic*() is compilable... no
checking whether madvise(2) is compilable... yes
checking whether to force 32-bit __sync_{add,sub}_and_fetch()... no
checking whether to force 64-bit __sync_{add,sub}_and_fetch()... no
checking for __builtin_clz... yes
checking whether Darwin OSSpin*() is compilable... no
checking whether glibc malloc hook is compilable... no
checking whether glibc memalign hook is compilable... no
checking whether pthreads adaptive mutexes is compilable... no
checking for stdbool.h that conforms to C99... yes
checking for _Bool... yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating jemalloc.pc
config.status: creating doc/html.xsl
config.status: creating doc/manpages.xsl
config.status: creating doc/jemalloc.xml
config.status: creating include/jemalloc/jemalloc_macros.h
config.status: creating include/jemalloc/jemalloc_protos.h
config.status: creating include/jemalloc/jemalloc_typedefs.h
config.status: creating include/jemalloc/internal/jemalloc_internal.h
config.status: creating test/test.sh
config.status: creating test/include/test/jemalloc_test.h
config.status: creating config.stamp
config.status: creating bin/jemalloc-config
config.status: creating bin/jemalloc.sh
config.status: creating bin/jeprof
config.status: creating include/jemalloc/jemalloc_defs.h
config.status: creating include/jemalloc/internal/jemalloc_internal_defs.h
config.status: creating test/include/test/jemalloc_test_defs.h
config.status: test/include/test/jemalloc_test_defs.h is unchanged
config.status: executing include/jemalloc/internal/private_namespace.h commands
config.status: executing include/jemalloc/internal/private_unnamespace.h commands
config.status: executing include/jemalloc/internal/public_symbols.txt commands
config.status: executing include/jemalloc/internal/public_namespace.h commands
config.status: executing include/jemalloc/internal/public_unnamespace.h commands
config.status: executing include/jemalloc/internal/size_classes.h commands
config.status: executing include/jemalloc/jemalloc_protos_jet.h commands
config.status: executing include/jemalloc/jemalloc_rename.h commands
config.status: executing include/jemalloc/jemalloc_mangle.h commands
config.status: executing include/jemalloc/jemalloc_mangle_jet.h commands
config.status: executing include/jemalloc/jemalloc.h commands
===============================================================================
jemalloc version : 0.0.0-0-g0000000000000000000000000000000000000000
library revision : 2
CONFIG :
CC : gcc
CFLAGS : -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops
CPPFLAGS : -D_REENTRANT
LDFLAGS :
EXTRA_LDFLAGS :
LIBS :
TESTLIBS :
RPATH_EXTRA :
XSLTPROC : false
XSLROOT :
PREFIX : /usr/local
BINDIR : /usr/local/bin
DATADIR : /usr/local/share
INCLUDEDIR : /usr/local/include
LIBDIR : /usr/local/lib
MANDIR : /usr/local/share/man
srcroot :
abs_srcroot : /home/sunyc_000/src/fluffos/src/thirdparty/jemalloc/
objroot :
abs_objroot : /home/sunyc_000/src/fluffos/src/thirdparty/jemalloc/
JEMALLOC_PREFIX : je_
JEMALLOC_PRIVATE_NAMESPACE
: je_
install_suffix :
autogen : 0
cc-silence : 1
debug : 0
code-coverage : 0
stats : 1
prof : 0
prof-libunwind : 0
prof-libgcc : 0
prof-gcc : 0
tcache : 1
fill : 1
utrace : 0
valgrind : 0
xmalloc : 0
munmap : 1
lazy_lock : 1
tls : 0
cache-oblivious : 1
===============================================================================
sunyc_000@vivien-surface ~/src/fluffos/src/thirdparty/jemalloc
$ make
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/jemalloc.o src/jemalloc.c
In file included from include/jemalloc/internal/jemalloc_internal.h:5:0,
from src/jemalloc.c:2:
include/jemalloc/internal/jemalloc_internal_decls.h:13:29: fatal error: sys/syscall.h: No such file or directory
 # include <sys/syscall.h>
^
compilation terminated.
Makefile:243: recipe for target 'src/jemalloc.o' failed
make: *** [src/jemalloc.o] Error 1
sunyc_000@vivien-surface ~/src/fluffos/src/thirdparty/jemalloc
$ vi include/jemalloc/internal/jemalloc_internal_decls.h
sunyc_000@vivien-surface ~/src/fluffos/src/thirdparty/jemalloc
$ make
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/jemalloc.o src/jemalloc.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/arena.o src/arena.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/atomic.o src/atomic.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/base.o src/base.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/bitmap.o src/bitmap.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/chunk.o src/chunk.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/chunk_dss.o src/chunk_dss.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/chunk_mmap.o src/chunk_mmap.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/ckh.o src/ckh.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/ctl.o src/ctl.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/extent.o src/extent.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/hash.o src/hash.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/huge.o src/huge.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/mb.o src/mb.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/mutex.o src/mutex.c
src/mutex.c: In function ‘pthread_create_once’:
src/mutex.c:41:30: error: ‘RTLD_NEXT’ undeclared (first use in this function)
pthread_create_fptr = dlsym(RTLD_NEXT, ""pthread_create"");
^
src/mutex.c:41:30: note: each undeclared identifier is reported only once for each function it appears in
Makefile:243: recipe for target 'src/mutex.o' failed
make: *** [src/mutex.o] Error 1
sunyc_000@vivien-surface ~/src/fluffos/src/thirdparty/jemalloc
$ vi src/mutex.c
sunyc_000@vivien-surface ~/src/fluffos/src/thirdparty/jemalloc
$ make
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/mutex.o src/mutex.c
gcc -std=gnu99 -Wall -Werror=declaration-after-statement -pipe -g3 -O3 -funroll-loops -c -D_REENTRANT -Iinclude -Iinclude -o src/pages.o src/pages.c
src/pages.c: In function ‘je_pages_purge’:
src/pages.c:161:6: error: #error ""No madvise(2) flag defined for purging unused dirty pages.""
# error ""No madvise(2) flag defined for purging unused dirty pages.""
^
src/pages.c:163:32: error: ‘JEMALLOC_MADV_PURGE’ undeclared (first use in this function)
int err = madvise(addr, size, JEMALLOC_MADV_PURGE);
^
src/pages.c:163:32: note: each undeclared identifier is reported only once for each function it appears in
src/pages.c:164:15: error: ‘JEMALLOC_MADV_ZEROS’ undeclared (first use in this function)
unzeroed = (!JEMALLOC_MADV_ZEROS || err != 0);
^
Makefile:243: recipe for target 'src/pages.o' failed
make: *** [src/pages.o] Error 1
```
",1,issues compiling under cygwin there are several issues compiling under cygwin uname a cygwin nt host cygwin namely there are no under cygwin need wrap it with ifndef cygwin issues with rtld next usage it is not supported under cygwin complain about error no madvise flag defined for purging unused dirty pages full log configure checking for xsltproc false checking for gcc gcc checking whether the c compiler works yes checking for c compiler default output file name a exe checking for suffix of executables exe checking whether we are cross compiling no checking for suffix of object files o checking whether we are using the gnu c compiler yes checking whether gcc accepts g yes checking for gcc option to accept iso none needed checking whether compiler supports std yes checking whether compiler supports wall yes checking whether compiler supports werror declaration after statement yes checking whether compiler supports pipe yes checking whether compiler supports yes checking how to run the c preprocessor gcc e checking for grep that handles long lines and e usr bin grep checking for egrep usr bin grep e checking for ansi c header files yes checking for sys types h yes checking for sys stat h yes checking for stdlib h yes checking for string h yes checking for memory h yes checking for strings h yes checking for inttypes h yes checking for stdint h yes checking for unistd h yes checking whether byte ordering is bigendian no checking size of void checking size of int checking size of long checking size of intmax t checking build system type unknown cygwin checking host system type unknown cygwin checking whether pause instruction is compilable yes checking for ar ar checking malloc h usability yes checking malloc h presence yes checking for malloc h yes checking whether malloc usable size definition can use const argument no checking whether attribute syntax is compilable yes checking whether compiler supports werror yes checking whether tls model attribute is compilable yes checking whether compiler supports werror yes checking whether alloc size attribute is compilable yes checking whether compiler supports werror yes checking whether format gnu printf attribute is compilable yes checking whether compiler supports werror yes checking whether format printf attribute is compilable yes checking for a bsd compatible install usr bin install c checking for ranlib ranlib checking for ld usr bin ld checking for autoconf usr bin autoconf checking for memalign yes checking for valloc yes checking whether compiler supports yes checking whether compiler supports funroll loops yes checking configured backtracing method n a checking for sbrk yes checking whether utrace is compilable no checking whether valgrind is compilable no checking whether a program using builtin ffsl is compilable yes checking lg page missing version file and unable to generate it creating bogus version checking for library containing clock gettime none required checking for secure getenv no checking for issetugid yes checking for malloc thread cleanup no checking for pthread mutex init calloc cb no forcing lazy lock to avoid allocator threading bootstrap issues forcing no tls to avoid allocator threading bootstrap issues checking whether atomics is compilable no checking whether atomic is compilable no checking whether darwin osatomic is compilable no checking whether madvise is compilable yes checking whether to force bit sync add sub and fetch no checking whether to force bit sync add sub and fetch no checking for builtin clz 
yes checking whether darwin osspin is compilable no checking whether glibc malloc hook is compilable no checking whether glibc memalign hook is compilable no checking whether pthreads adaptive mutexes is compilable no checking for stdbool h that conforms to yes checking for bool yes configure creating config status config status creating makefile config status creating jemalloc pc config status creating doc html xsl config status creating doc manpages xsl config status creating doc jemalloc xml config status creating include jemalloc jemalloc macros h config status creating include jemalloc jemalloc protos h config status creating include jemalloc jemalloc typedefs h config status creating include jemalloc internal jemalloc internal h config status creating test test sh config status creating test include test jemalloc test h config status creating config stamp config status creating bin jemalloc config config status creating bin jemalloc sh config status creating bin jeprof config status creating include jemalloc jemalloc defs h config status creating include jemalloc internal jemalloc internal defs h config status creating test include test jemalloc test defs h config status test include test jemalloc test defs h is unchanged config status executing include jemalloc internal private namespace h commands config status executing include jemalloc internal private unnamespace h commands config status executing include jemalloc internal public symbols txt commands config status executing include jemalloc internal public namespace h commands config status executing include jemalloc internal public unnamespace h commands config status executing include jemalloc internal size classes h commands config status executing include jemalloc jemalloc protos jet h commands config status executing include jemalloc jemalloc rename h commands config status executing include jemalloc jemalloc mangle h commands config status executing include jemalloc jemalloc mangle jet h commands config status executing include jemalloc jemalloc h commands jemalloc version library revision config cc gcc cflags std wall werror declaration after statement pipe funroll loops cppflags d reentrant ldflags extra ldflags libs testlibs rpath extra xsltproc false xslroot prefix usr local bindir usr local bin datadir usr local share includedir usr local include libdir usr local lib mandir usr local share man srcroot abs srcroot home sunyc src fluffos src thirdparty jemalloc objroot abs objroot home sunyc src fluffos src thirdparty jemalloc jemalloc prefix je jemalloc private namespace je install suffix autogen cc silence debug code coverage stats prof prof libunwind prof libgcc prof gcc tcache fill utrace valgrind xmalloc munmap lazy lock tls cache oblivious sunyc vivien surface src fluffos src thirdparty jemalloc make gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src jemalloc o src jemalloc c in file included from include jemalloc internal jemalloc internal h from src jemalloc c include jemalloc internal jemalloc internal decls h fatal error sys syscall h no such file or directory include compilation terminated makefile recipe for target src jemalloc o failed make error sunyc vivien surface src fluffos src thirdparty jemalloc vi include jemalloc internal jemalloc internal decls h sunyc vivien surface src fluffos src thirdparty jemalloc make gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src jemalloc o src jemalloc c gcc std 
wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src arena o src arena c gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src atomic o src atomic c gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src base o src base c gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src bitmap o src bitmap c gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src chunk o src chunk c gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src chunk dss o src chunk dss c gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src chunk mmap o src chunk mmap c gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src ckh o src ckh c gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src ctl o src ctl c gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src extent o src extent c gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src hash o src hash c gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src huge o src huge c gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src mb o src mb c gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src mutex o src mutex c src mutex c in function ‘pthread create once’ src mutex c error ‘rtld next’ undeclared first use in this function pthread create fptr dlsym rtld next pthread create src mutex c note each undeclared identifier is reported only once for each function it appears in makefile recipe for target src mutex o failed make error sunyc vivien surface src fluffos src thirdparty jemalloc vi src mutex c sunyc vivien surface src fluffos src thirdparty jemalloc make gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src mutex o src mutex c gcc std wall werror declaration after statement pipe funroll loops c d reentrant iinclude iinclude o src pages o src pages c src pages c in function ‘je pages purge’ src pages c error error no madvise flag defined for purging unused dirty pages error no madvise flag defined for purging unused dirty pages src pages c error ‘jemalloc madv purge’ undeclared first use in this function int err madvise addr size jemalloc madv purge src pages c note each undeclared identifier is reported only once for each function it appears in src pages c error ‘jemalloc madv zeros’ undeclared first use in this function unzeroed jemalloc madv zeros err makefile recipe for target src pages o failed make error ,1
78102,14948570190.0,IssuesEvent,2021-01-26 10:15:47,intellij-rust/intellij-rust,https://api.github.com/repos/intellij-rust/intellij-rust,closed,Create function intention should create async function if necessary,bug subsystem::code insight,"
## Environment
* **IntelliJ Rust plugin version:** 0.3.139.3615-203
* **Rust toolchain version:** 1.51.0-nightly (4253153db 2021-01-17) x86_64-unknown-linux-gnu
* **IDE name and version:** CLion 2020.3.1 (CL-203.6682.181)
* **Operating system:** Linux 5.10.9-arch1-1
* **Macro expansion engine:** new
* **Name resolution engine:** old
## Problem description
""Create function intention"" should create async function if we call `.await` directly on function call result:

## Steps to reproduce
```rust
async fn foo() {
/*cursor*/bar().await;
}
```",1.0,"Create function intention should create async function if necessary -
## Environment
* **IntelliJ Rust plugin version:** 0.3.139.3615-203
* **Rust toolchain version:** 1.51.0-nightly (4253153db 2021-01-17) x86_64-unknown-linux-gnu
* **IDE name and version:** CLion 2020.3.1 (CL-203.6682.181)
* **Operating system:** Linux 5.10.9-arch1-1
* **Macro expansion engine:** new
* **Name resolution engine:** old
## Problem description
""Create function intention"" should create async function if we call `.await` directly on function call result:

## Steps to reproduce
```rust
async fn foo() {
/*cursor*/bar().await;
}
```",0,create function intention should create async function if necessary hello and thank you for the issue if you would like to report a bug we have added some points below that you can fill out feel free to remove all the irrelevant text to request a new feature environment intellij rust plugin version rust toolchain version nightly unknown linux gnu ide name and version clion cl operating system linux macro expansion engine new name resolution engine old problem description create function intention should create async function if we call await directly on function call result steps to reproduce rust async fn foo cursor bar await ,0
417419,12159884022.0,IssuesEvent,2020-04-26 11:01:45,zulip/zulip,https://api.github.com/repos/zulip/zulip,closed,Remove DefaultStream objects associated with a stream on deactivation,area: stream settings bug help wanted in progress priority: high,"Apparently, we have a bug where if you deactivate a stream that's marked as a default stream, we don't call `do_remove_default_stream` on it, resulting in these being leaked, which can be confusing.
We should fix this for the upcoming DefaultStreamGroup feature at the same time.
`do_deactivate_stream` is the code path that needs to handle this.
",1.0,"Remove DefaultStream objects associated with a stream on deactivation - Apparently, we have a bug where if you deactivate a stream that's marked as a default stream, we don't call `do_remove_default_stream` on it, resulting in these being leaked, which can be confusing.
We should fix this for the upcoming DefaultStreamGroup feature at the same time.
`do_deactivate_stream` is the code path that needs to handle this.
",0,remove defaultstream objects associated with a stream on deactivation apparently we have a bug where if you deactivate a stream that s marked as a default stream we don t call do remove default stream on it resulting in these being leaked which can be confusing we should fix this for the upcoming defaultstreamgroup feature at the same time we fix this do deactivate stream is the code path that needs to handle this ,0
1532,22157266533.0,IssuesEvent,2022-06-04 01:49:20,apache/beam,https://api.github.com/repos/apache/beam,opened,Spark portable runner: support SDF,new feature P3 runner-spark portability-spark,"
Imported from Jira [BEAM-7222](https://issues.apache.org/jira/browse/BEAM-7222). Original Jira may contain additional context.
Reported by: ibzib.",True,"Spark portable runner: support SDF -
Imported from Jira [BEAM-7222](https://issues.apache.org/jira/browse/BEAM-7222). Original Jira may contain additional context.
Reported by: ibzib.",1,spark portable runner support sdf imported from jira original jira may contain additional context reported by ibzib ,1
1259,16718129728.0,IssuesEvent,2021-06-10 01:37:19,nick-nuti/Backup-Program,https://api.github.com/repos/nick-nuti/Backup-Program,opened,Use Merkle Trees for each backup digest,portability,"Represent the backup digest as a Merkle Tree.
https://en.wikipedia.org/wiki/Merkle_tree
Merkle trees will allow us to effectively determine differences between digests. From this, we may easily determine if any files have been modified, which would require modified files to be synced to the backend storage.
Considerations:
1) Is it important to retain the original state of a file even after it has been modified?
* e.g.: Backup1.json from Downloads. User modifies a text file inside Downloads. Backup2.json includes the updated text file with modified contents. **Should we keep the original text file as a means of allowing the user to ""step backwards in time""?** This would mean holding on to multiple copies of the same source content.
2) Does each successive backup digest include a full Merkle Tree OR just the diff of the previous backup and the current state?
* Full Merkle tree would be memory heavy on the digest side of things.
* Partial Merkle tree would require some additional work to reconstruct the full tree when a user asks for a specific backup.
* Memory vs Time?
3) What hashing algorithm are we using for nodes? SHA256? Consider portability.
REF #13 ",True,"Use Merkle Trees for each backup digest - Represent the backup digest as a Merkle Tree.
https://en.wikipedia.org/wiki/Merkle_tree
Merkle trees will allow us to effectively determine differences between digests. From this, we may easily determine if any files have been modified, which would require modified files to be synced to the backend storage.
Considerations:
1) Is it important to retain the original state of a file even after it has been modified?
* e.g.: Backup1.json from Downloads. User modifies a text file inside Downloads. Backup2.json includes the updated text file with modified contents. **Should we keep the original text file as a means of allowing the user to ""step backwards in time""?** This would mean holding on to multiple copies of the same source content.
2) Does each successive backup digest include a full Merkle Tree OR just the diff of the previous backup and the current state?
* Full Merkle tree would be memory heavy on the digest side of things.
* Partial Merkle tree would require some additional work to reconstruct the full tree when a user asks for a specific backup.
* Memory vs Time?
3) What hashing algorithm are we using for nodes? SHA256? Consider portability.
REF #13 ",1,use merkle trees for each backup digest represent the backup digest as a merkle tree merkle trees will allow us to effectively determine differences between digests from this we may easily determine if any files have been modified which would require modified files to be synced to the backend storage considerations is it important to retain the original state of a file even after it has be modified e g json from downloads user modifies a text file inside downloads json includes the updated text file with modified contents should we keep the original text file as a means of allowing the user to step backwards in time this would mean holding on to multiple copies of the same source content does each successive backup digest include a full merkle tree or just the diff of the previous backup and the current state full merkle tree would be memory heavy on the digest side of things partial merkle tree would require some additional work to reconstruct the full tree when a user asks for a specific backup memory vs time what hashing algorithm are we using for nodes consider portability ref ,1
1331,18686086301.0,IssuesEvent,2021-11-01 12:37:31,openwall/john,https://api.github.com/repos/openwall/john,closed,ERROR: make -sj4 (creating secp256k1.a / Dynamic / Dynamic_func) install-windows file,portability,"**Story:**
Followed the INSTALL-WINDOWS file.
Had to install a couple more packages, like Make and GCC (the C compiler), that were not listed in the INSTALL-WINDOWS file.
Tried both the Cygwinx86 and Cygwinx86_64 builds; both threw the same error on package "" Secp256k1.a "" after the command:
`./configure && make -s clean && make -sj4`
**System:** Windows 10 Pro (Version: 21H1)
**Architecture:** x64
**Steps:**
1. Installed latest Cygwinx86_64 and Cygwinx86 build
2. Downloaded latest john-bleeding-jumbo
3. Installed required packages
_Packages mentioned in INSTALL-WINDOWS file:_
libssl-devel , libbz2-devel , libgmp-devel , zlib-devel , libOpenCL-devel, libcrypt-devel
_Packages I installed manually after running into John errors:_
make , gcc-core
4. Did everything with CMD / Cygwin terminal with admin privileges
**The error:**
```
ar: creating aes.a **successful**
ar: creating poly1305-donna.a **successful**
ar: creating ed25519-donna.a **successful**
ar: creating secp256k1.a **error**
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_fmt.o: in function crypt_all':
/cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:1701: undefined reference to DynamicFunc__KECCAK_224_crypt_input2_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:1703: undefined reference to DynamicFunc__KECCAK_384_crypt_input2_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:1758: undefined reference to DynamicFunc__KECCAK_224_crypt_input2_append_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:1760: undefined reference to DynamicFunc__KECCAK_384_crypt_input2_append_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_fmt.o: in function isKECCAKFunc':
/cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_224_crypt_input1_append_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_224_crypt_input2_append_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_224_crypt_input1_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_224_crypt_input2_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_224_crypt_input1_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_224_crypt_input2_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_384_crypt_input1_append_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_384_crypt_input2_append_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_384_crypt_input1_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_384_crypt_input2_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_384_crypt_input1_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_384_crypt_input2_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_fmt.o: in function isLargeHashFinalFunc':
/cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7331: undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7332: undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7332: undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7332: undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x101e4): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_append_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x101ec): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_append_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x101f4): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_at_offset_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x101fc): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_at_offset_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10204): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_at_offset_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1020c): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_at_offset_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10214): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1021c): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10224): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1022c): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10234): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1023c): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10244): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output3'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1024c): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output4'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10254): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1025c): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10264): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output3'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1026c): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output4'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10274): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1027c): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10324): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_append_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1032c): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_append_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10334): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_at_offset_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1033c): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_at_offset_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10344): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_at_offset_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1034c): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_at_offset_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10354): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1035c): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10364): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1036c): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10374): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1037c): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10384): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output3'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1038c): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output4'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10394): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1039c): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x103a4): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output3'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x103ac): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output4'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x103b4): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x103bc): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x50a4): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_append_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x50ac): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x50c0): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x50d4): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x50e4): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x50f0): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x50f4): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x50fc): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5100): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5114): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5128): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5130): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5804): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_append_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x580c): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5820): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5834): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5844): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5850): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5854): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x585c): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5860): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5874): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5888): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5890): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output1_FINAL'
collect2: error: ld returned 1 exit status
make[1]: *** [Makefile:596: ../run/john.exe] Error 1
make: *** [Makefile:190: default] Error 2```
",True,"ERROR: make -sj4 (creating secp256k1.a / Dynamic / Dynamic_func) install-windows file - **Story:**
Followed the INSTALL-WINDOWS file.
Had to install a couple more packages, like Make and GCC (the C compiler), that were not listed in the INSTALL-WINDOWS file.
Tried both the Cygwinx86 and Cygwinx86_64 builds; both threw the same error on package "" Secp256k1.a "" after the command:
`./configure && make -s clean && make -sj4`
**System:** Windows 10 Pro (Version: 21H1)
**Architecture:** x64
**Steps:**
1. Installed latest Cygwinx86_64 and Cygwinx86 build
2. Downloaded latest john-bleeding-jumbo
3. Installed required packages
_Packages mentioned in INSTALL-WINDOWS file:_
libssl-devel , libbz2-devel , libgmp-devel , zlib-devel , libOpenCL-devel, libcrypt-devel
_Packages I installed manually after running into John errors:_
make , gcc-core
4. Did everything with CMD / Cygwin terminal with admin privileges
**The error:**
```
ar: creating aes.a **successful**
ar: creating poly1305-donna.a **successful**
ar: creating ed25519-donna.a **successful**
ar: creating secp256k1.a **error**
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_fmt.o: in function crypt_all':
/cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:1701: undefined reference to DynamicFunc__KECCAK_224_crypt_input2_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:1703: undefined reference to DynamicFunc__KECCAK_384_crypt_input2_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:1758: undefined reference to DynamicFunc__KECCAK_224_crypt_input2_append_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:1760: undefined reference to DynamicFunc__KECCAK_384_crypt_input2_append_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_fmt.o: in function isKECCAKFunc':
/cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_224_crypt_input1_append_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_224_crypt_input2_append_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_224_crypt_input1_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_224_crypt_input2_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_224_crypt_input1_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_224_crypt_input2_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_384_crypt_input1_append_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_384_crypt_input2_append_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_384_crypt_input1_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_384_crypt_input2_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_384_crypt_input1_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_384_crypt_input2_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7315: undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_fmt.o: in function isLargeHashFinalFunc':
/cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7331: undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7332: undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7332: undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: /cygdrive/c/Users/Username/Downloads/john-bleeding-jumbo/src/dynamic_fmt.c:7332: undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x101e4): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_append_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x101ec): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_append_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x101f4): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_at_offset_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x101fc): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_at_offset_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10204): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_at_offset_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1020c): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_at_offset_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10214): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1021c): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10224): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1022c): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10234): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1023c): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10244): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output3'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1024c): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output4'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10254): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1025c): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10264): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output3'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1026c): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output4'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10274): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1027c): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10324): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_append_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1032c): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_append_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10334): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_at_offset_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1033c): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_at_offset_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10344): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_at_offset_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1034c): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_at_offset_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10354): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1035c): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10364): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1036c): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_overwrite_input1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10374): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1037c): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10384): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output3'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1038c): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output4'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x10394): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x1039c): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x103a4): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output3'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x103ac): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output4'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x103b4): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_parser.o:dynamic_parser:(.rdata+0x103bc): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x50a4): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_append_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x50ac): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x50c0): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x50d4): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x50e4): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x50f0): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x50f4): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x50fc): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5100): undefined reference to DynamicFunc__KECCAK_384_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5114): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5128): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5130): undefined reference to DynamicFunc__KECCAK_384_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5804): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_append_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x580c): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5820): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5834): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5844): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5850): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5854): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x585c): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_overwrite_input2'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5860): undefined reference to DynamicFunc__KECCAK_224_crypt_input2_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5874): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5888): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output1_FINAL'
/usr/lib/gcc/i686-pc-cygwin/11/../../../../i686-pc-cygwin/bin/ld: dynamic_preloads.o:dynamic_preloa:(.data+0x5890): undefined reference to DynamicFunc__KECCAK_224_crypt_input1_to_output1_FINAL'
collect2: error: ld returned 1 exit status
make[1]: *** [Makefile:596: ../run/john.exe] Error 1
make: *** [Makefile:190: default] Error 2```
",1,error make creating a dynamic dynamic func install windows file story followed the install windows file had to install a couple more packages like make and gcc c compiler that were not presented in the install windows file tried and build both threw the same error on package a after command configure make s clean make system windows pro version architecture steps installed latest and build downloaded latest john bleeding jumbo installed required packages packages mentioned in install windows file libssl devel devel libgmp devel zlib devel libopencl devel libcrypt devel packages i installed manually after running into john errors make gcc core did everything with cmd cygwin terminal with admin privileges the error ar creating aes a successful ar creating donna a successful ar creating donna a successful ar creating a error usr lib gcc pc cygwin pc cygwin bin ld dynamic fmt o in function crypt all cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt append usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt append usr lib gcc pc cygwin pc cygwin bin ld dynamic fmt o in function iskeccakfunc cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt append usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt append usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt append usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt append usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo 
src dynamic fmt c undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic fmt o in function islargehashfinalfunc cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld cygdrive c users username downloads john bleeding jumbo src dynamic fmt c undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt append usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt append usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt at offset usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt at offset usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt at offset usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt at offset usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata 
undefined reference to dynamicfunc keccak crypt to usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt append usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt append usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt at offset usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt at offset usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt at offset usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt at offset usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic parser o dynamic parser rdata undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt append usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa 
data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt append usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt overwrite usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final usr lib gcc pc cygwin pc cygwin bin ld dynamic preloads o dynamic preloa data undefined reference to dynamicfunc keccak crypt to final error ld returned exit status make error make error ,1
243531,20407404983.0,IssuesEvent,2022-02-23 07:49:48,tendermint/spn,https://api.github.com/repos/tendermint/spn,opened,Investigate Table-Driven-Tests improvement to remove context initialization,test research,"https://github.com/tendermint/spn/pull/540#discussion_r811718090
Many unit tests we use in our codebase use Table Driven Tests (TDT)
The idea is to create a table of test cases containing `input, expected output` for better maintainability and visualisation.
In many of these tests, we perform first initialization before the table declaration. Most of the time, initialization of coordinators, chains, etc...
This context initialization runs counter to the idea of TDT, where the whole purpose of a test case should be visible in the test case itself, and makes the tests less readable and maintainable.
The idea would be to investigate how we could incorporate the context initialization in the test case itself as the initial state of the test case. For example, describing the list of coordinators in the test case, the coordinators are then initialized in the body of the TDT
Example:
```
[coordinators initialization]
[chains initialization]
foreach []testCase{
{
input
expectedOutput
}
....
} {
[assertion]
}
```
would become
```
foreach []testCase{
{
initialState{
coordinators
chains
}
input
expectedOutput
}
....
} {
[coordinators initialization]
[chains initialization]
[assertion]
}
```
The entire task would be big, the first step would be to determine if this change is relevant and where it could be applied",1.0,"Investigate Table-Driven-Tests improvement to remove context initialization - https://github.com/tendermint/spn/pull/540#discussion_r811718090
Many unit tests we use in our codebase use Table Driven Tests (TDT)
The idea is to create a table of test cases containing `input, expected output` for better maintainability and visualisation.
In many of these tests, we perform first initialization before the table declaration. Most of the time, initialization of coordinators, chains, etc...
This context initialization runs counter to the idea of TDT, where the whole purpose of a test case should be visible in the test case itself, and makes the tests less readable and maintainable.
The idea would be to investigate how we could incorporate the context initialization in the test case itself as the initial state of the test case. For example, describing the list of coordinators in the test case, the coordinators are then initialized in the body of the TDT
Example:
```
[coordinators initialization]
[chains initialization]
foreach []testCase{
{
input
expectedOutput
}
....
} {
[assertion]
}
```
would become
```
foreach []testCase{
{
initialState{
coordinators
chains
}
input
expectedOutput
}
....
} {
[coordinators initialization]
[chains initialization]
[assertion]
}
```
The entire task would be big, the first step would be to determine if this change is relevant and where it could be applied",0,investigate table driven tests improvement to remove context initialization many unit tests we use in our codebase use table driven tests tdt the idea is to create a table of test cases containing input expected output for better maintainability and visualisation in many of these tests we perform first initialization before the table declaration most of the time initialization of coordinators chains etc this context initialization makes our tests less in the idea of tdt where the whole purpose of a test case should be visualized in the test case itself and make the tests less visual and maintainable the idea would be to investigate how we could incorporate the context initialization in the test case itself as the initial state of the test case for example describing the list of coordinators in the test case the coordinators are then initialized in the body of the tdt example foreach testcase input expectedoutput would become foreach testcase initialstate coordinators chains input expectedoutput the entire task would be big the first step would be to determine if this change is relevant and where it could be applied,0
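As a rough Go sketch of the layout the TDT record above proposes — each test case carrying its own initial state, initialized inside the loop body — the example below may help; the `Coordinator`, `Chain`, and `newTestKeeper` names are invented for illustration and are not the actual tendermint/spn test helpers.

```go
package example

import "testing"

// Hypothetical domain types standing in for the module state; they are
// not the real tendermint/spn APIs.
type Coordinator struct{ Address string }
type Chain struct{ ID string }

type state struct {
	coordinators []Coordinator
	chains       []Chain
}

// newTestKeeper is a placeholder for whatever builds the keeper/context
// in the real test suite; here it just copies the declared state.
func newTestKeeper(t *testing.T, s state) *state {
	t.Helper()
	return &s
}

func TestChainCount(t *testing.T) {
	tests := []struct {
		name         string
		initialState state // per-case context, declared in the table itself
		want         int
	}{
		{
			name:         "no chains",
			initialState: state{coordinators: []Coordinator{{Address: "coord1"}}},
			want:         0,
		},
		{
			name: "one chain",
			initialState: state{
				coordinators: []Coordinator{{Address: "coord1"}},
				chains:       []Chain{{ID: "chain-1"}},
			},
			want: 1,
		},
	}
	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			// initialization moved out of the surrounding function and
			// into the loop body, driven entirely by the test case
			k := newTestKeeper(t, tc.initialState)
			if got := len(k.chains); got != tc.want {
				t.Errorf("got %d chains, want %d", got, tc.want)
			}
		})
	}
}
```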
48243,6086178839.0,IssuesEvent,2017-06-17 21:51:42,elsevier-core-engineering/replicator,https://api.github.com/repos/elsevier-core-engineering/replicator,opened,Protect Worker Pool Node Running The Replicator Leader From Termination During Cluster Scale-In,bug core-design-change high-priority,"**Description**
With Replicator running as a Nomad job, it is now possible that we could initiate the termination of the worker pool node on which the Replicator leader is running during cluster scale in operations.
To protect against this, the least-allocated node discovery method should be modified to filter out the worker pool node that is running the current Replicator leader.
During initialization, if Replicator obtains leadership it should determine and store information about the worker pool node on which it is running and make this information available to the least-allocated node discovery method.",1.0,"Protect Worker Pool Node Running The Replicator Leader From Termination During Cluster Scale-In - **Description**
With Replicator running as a Nomad job, it is now possible that we could initiate the termination of the worker pool node on which the Replicator leader is running during cluster scale in operations.
To protect against this, the least-allocated node discovery method should be modified to filter out the worker pool node that is running the current Replicator leader.
During initialization, if Replicator obtains leadership it should determine and store information about the worker pool node on which it is running and make this information available to the least-allocated node discovery method.",0,protect worker pool node running the replicator leader from termination during cluster scale in description with replicator running as a nomad job it is now possible that we could initiate the termination of the worker pool node on which the replicator leader is running during cluster scale in operations to protect against this the least allocated node discovery method should be modified to filter out the worker pool node that is running the current replicator leader during initialization if replicator obtains leadership it should determine and store information about the worker pool node on which it is running and make this information available to the least allocated node discovery method ,0
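To illustrate the filtering the Replicator record above asks for, here is a minimal Go sketch that skips the node hosting the current leader when picking a least-allocated termination candidate; the `Node` type and function name are assumptions, not the project's real API.

```go
package example

// Node is a hypothetical stand-in for a Nomad worker-pool node record.
type Node struct {
	ID          string
	Allocations int
}

// leastAllocatedExcludingLeader returns the candidate with the fewest
// allocations, skipping the node that hosts the current Replicator
// leader. leaderNodeID would be captured during leader initialization,
// as the issue proposes.
func leastAllocatedExcludingLeader(nodes []Node, leaderNodeID string) (Node, bool) {
	var best Node
	found := false
	for _, n := range nodes {
		if n.ID == leaderNodeID {
			continue // never offer the leader's own node for termination
		}
		if !found || n.Allocations < best.Allocations {
			best, found = n, true
		}
	}
	return best, found
}
```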
1272,16990355815.0,IssuesEvent,2021-06-30 19:32:55,argoproj/argo-cd,https://api.github.com/repos/argoproj/argo-cd,closed,Simplify parametrization of Argo CD server processes,component:distribution enhancement type:supportability,"# Summary
We should simplify the way server processes can be parametrized, to make it easier for users to adapt Argo CD to their respective run time environments.
# Motivation
Currently, to change behaviour of Argo CD server processes deployed as Kubernetes workloads (`argocd-server`, `argocd-repo-server` and `argocd-application-controller`), it is necessary to edit the `Deployment` or `StatefulSet` resources and edit `.spec.template.spec.containers[*].command` to include necessary command line parameters.
This can become quite complex and error-prone, even when a config management tool like Kustomize is used. Without a config management tool, it can become outright frustrating - especially in case of upgrading using original/unedited manifests.
# Proposal
I think a good way to make this simpler would be to introduce a construct like the following:
1. Introduce an environment variable for each of the command line parameters in each of the CLIs, named `<COMMAND>_<PARAMETER>` (uppercased); for example, for the command line switch `--insecure` in `argocd-server`, an environment variable `ARGOCD_SERVER_INSECURE` should exist.
1. If the command line parameter is given explicitly, its value will take precedence.
1. If the command line parameter is not given, but the variable is set, the variable's value will be used for the parameter.
1. If none of command line parameter or environment variable is set, the default value will be used
1. We either introduce new `ConfigMap` resources (e.g. `argocd-server-config`, `argocd-repo-server-config`, `argocd-application-controller-config`), introduce a single new `ConfigMap` resource (e.g. `argocd-startup-config`), or reuse the existing `argocd-cm`. But since the latter is also watched by Argo CD for runtime configuration, we should go for one of the two former approaches.
1. The existing `Deployment` and `StatefulSet` resources will be modified with a series of `env` entries in `spec.template.spec.containers[*]`, mapping environment variables to entries in ConfigMap from previous step, e.g.:
```yaml
env:
- name: ARGOCD_SERVER_INSECURE
valueFrom:
configMapKeyRef:
name: argocd-server-config
key: argocd-server.insecure
optional: true
```
With this approach, users could simply set the parameters in the ConfigMap and perform a rolling restart of `Deployment` or `StatefulSet` to use the new parameters. Without much frustration.
Proof of concept already exists in `argocd-image-updater` codebase, refer to
* https://github.com/argoproj-labs/argocd-image-updater/blob/e3b13f16bfc543ffe98fac6b84b309fc8bf719ff/cmd/main.go#L455 for integration with Cobra CLI framework and
* https://github.com/argoproj-labs/argocd-image-updater/blob/e3b13f16bfc543ffe98fac6b84b309fc8bf719ff/manifests/base/deployment/argocd-image-updater-deployment.yaml#L24 for integration with the manifests
Contra:
* The workloads are not restarted automatically by Kubernetes upon a change of parametrization",True,"Simplify parametrization of Argo CD server processes - # Summary
We should simplify the way server processes can be parametrized, to make it easier for users to adapt Argo CD to their respective run time environments.
# Motivation
Currently, to change behaviour of Argo CD server processes deployed as Kubernetes workloads (`argocd-server`, `argocd-repo-server` and `argocd-application-controller`), it is necessary to edit the `Deployment` or `StatefulSet` resources and edit `.spec.template.spec.containers[*].command` to include necessary command line parameters.
This can become quite complex and error-prone, even when a config management tool like Kustomize is used. Without a config management tool, it can become outright frustrating - especially in case of upgrading using original/unedited manifests.
# Proposal
I think a good way to make this simpler would be to introduce a construct like the following:
1. Introduce an environment variable for each of the command line parameters in each of the CLIs, named `<COMMAND>_<PARAMETER>` (uppercased); for example, for the command line switch `--insecure` in `argocd-server`, an environment variable `ARGOCD_SERVER_INSECURE` should exist.
1. If the command line parameter is given explicitly, its value will take precedence.
1. If the command line parameter is not given, but the variable is set, the variable's value will be used for the parameter.
1. If none of command line parameter or environment variable is set, the default value will be used
1. We either introduce new `ConfigMap` resources (e.g. `argocd-server-config`, `argocd-repo-server-config`, `argocd-application-controller-config`), introduce a single new `ConfigMap` resource (e.g. `argocd-startup-config`), or reuse the existing `argocd-cm`. But since the latter is also watched by Argo CD for runtime configuration, we should go for one of the two former approaches.
1. The existing `Deployment` and `StatefulSet` resources will be modified with a series of `env` entries in `spec.template.spec.containers[*]`, mapping environment variables to entries in ConfigMap from previous step, e.g.:
```yaml
env:
- name: ARGOCD_SERVER_INSECURE
valueFrom:
configMapKeyRef:
name: argocd-server-config
key: argocd-server.insecure
optional: true
```
With this approach, users could simply set the parameters in the ConfigMap and perform a rolling restart of `Deployment` or `StatefulSet` to use the new parameters. Without much frustration.
Proof of concept already exists in `argocd-image-updater` codebase, refer to
* https://github.com/argoproj-labs/argocd-image-updater/blob/e3b13f16bfc543ffe98fac6b84b309fc8bf719ff/cmd/main.go#L455 for integration with Cobra CLI framework and
* https://github.com/argoproj-labs/argocd-image-updater/blob/e3b13f16bfc543ffe98fac6b84b309fc8bf719ff/manifests/base/deployment/argocd-image-updater-deployment.yaml#L24 for integration with the manifests
Contra:
* The workloads are not restarted automatically by Kubernetes upon a change of parametrization",1,simplify parametrization of argo cd server processes summary we should simplify the way server processes can be parametrized to make it easier for users to adapt argo cd to their respective run time environments motivation currently to change behaviour of argo cd server processes deployed as kubernetes workloads argocd server argocd repo server and argocd application controller it is necessary to edit the deployment or statefulset resources and edit spec template spec containers command to include necessary command line parameters this can become quite complex and error prone even when a config management tool like kustomize is used without a config management tool it can become outright frustrating especially in case of upgrading using original unedited manifests proposal i think a good way to make this simpler would be to introduce a construct like the following introduce an environment variable for each of the command line parameters in each of the clis whose name is for example for the command line switch insecure in argocd server an environment variable argocd server insecure should exist if the command line parameter is given explicitly its value will take precedence if the command line parameter is not given but the variable is set the variable s value will be used for the parameter if none of command line parameter or environment variable is set the default value will be used we either introduce new configmap resources e g argocd server config argocd repo server config argocd application controller config introduce a single new configmap resources e g argocd startup config or reuse existing argocd cm but since the latter is also watched by argo cd for runtime configuration we should go for one of the two former approaches the existing deployment and statefulset resources will be modified with a series of env entries in spec template spec containers mapping environment variables to entries in configmap from previous step e g yaml env name argocd server insecure valuefrom configmapkeyref name argocd server config key argocd server insecure optional true with this approach users could simply set the parameters in the configmap and perform a rolling restart of deployment or statefulset to use the new parameters without much frustration proof of concept already exists in argocd image updater codebase refer to for integration with cobra cli framework and for integration with the manifests contra the workloads are not restarted automatically by kubernetes upon a change of parametrization,1
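As a hedged illustration of the flag > environment variable > default precedence described in the Argo CD record above, the standard-library Go sketch below resolves a boolean parameter that way; it does not reproduce the Cobra-based proof of concept linked in the issue, and only the `--insecure` / `ARGOCD_SERVER_INSECURE` names from the example are reused.

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// resolveBool applies the precedence from the proposal: an explicitly
// given command-line flag wins, then the environment variable, then the
// built-in default.
func resolveBool(fs *flag.FlagSet, flagName, envName string, def bool) bool {
	set := false
	fs.Visit(func(f *flag.Flag) { // Visit only sees flags that were set
		if f.Name == flagName {
			set = true
		}
	})
	if set {
		return fs.Lookup(flagName).Value.String() == "true"
	}
	if v, ok := os.LookupEnv(envName); ok {
		return v == "true"
	}
	return def
}

func main() {
	fs := flag.NewFlagSet("argocd-server", flag.ExitOnError)
	fs.Bool("insecure", false, "disable TLS")
	_ = fs.Parse(os.Args[1:])

	// ARGOCD_SERVER_INSECURE is the variable name used in the proposal.
	fmt.Println("insecure =", resolveBool(fs, "insecure", "ARGOCD_SERVER_INSECURE", false))
}
```

Running it with `--insecure` set wins over the environment variable, matching the precedence rules listed in the record.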
363493,25453996769.0,IssuesEvent,2022-11-24 12:46:15,jkutkutOrg/Java-HR_App,https://api.github.com/repos/jkutkutOrg/Java-HR_App,closed,Define repo structure,documentation,"Install latest JAVA JDK 8 from [here](http://www.oracle.com/technetwork/java/javase/downloads/index.html).
Install IntelliJ, Eclipse or Netbeans IDE (I prefer IntelliJ)
Install Scene Builder from [here](http://gluonhq.com/open-source/scene-builder/).
Install Oracle Express Edition from [here](http://www.oracle.com/technetwork/database/database-technologies/express-edition/downloads/index.html).",1.0,"Define repo structure - Install latest JAVA JDK 8 from [here](http://www.oracle.com/technetwork/java/javase/downloads/index.html).
Install IntelliJ, Eclipse or Netbeans IDE (I prefer IntelliJ)
Install Scene Builder from [here](http://gluonhq.com/open-source/scene-builder/).
Install Oracle Express Edition from [here](http://www.oracle.com/technetwork/database/database-technologies/express-edition/downloads/index.html).",0,define repo structure install latest java jdk from install intellij eclipse or netbeans ide i prefer intellij install scene builder from install oracle express edition from ,0
1565,23018860845.0,IssuesEvent,2022-07-22 01:33:55,redpanda-data/redpanda,https://api.github.com/repos/redpanda-data/redpanda,opened,More named semaphores: mutex etc.,kind/enhance supportability,"Continuing on the work in #5490 to use named semaphores to help with debugging timed out / broken semaphores...
- Move utils/mutex.h to ssx/semaphore.h.
- Change constructor to take a name string and pass that through to the underlying semaphore.
- Do the same for any other semaphore wrappers that are used in different contexts. timed_mutex.h, etc..",True,"More named semaphores: mutex etc. - Continuing on the work in #5490 to use named semaphores to help with debugging timed out / broken semaphores...
- Move utils/mutex.h to ssx/semaphore.h.
- Change constructor to take a name string and pass that through to the underlying semaphore.
- Do the same for any other semaphore wrappers that are used in different contexts. timed_mutex.h, etc..",1,more named semaphores mutex etc continuing on the work in to use named semaphores to help with debugging timed out broken semaphores move utils mutex h to ssx semaphore h change constructor to take a name string and pass that through to the underlying semaphore do the same for any other semaphore wrappers that are used in different contexts timed mutex h etc ,1
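A minimal Go analogue of the named-semaphore wrapper the redpanda record above asks for is sketched below; it only illustrates the idea of threading a name through to error messages for debugging and is unrelated to the actual Seastar/C++ implementation.

```go
package example

import (
	"context"
	"fmt"
)

// namedSemaphore is a counting semaphore that carries a human-readable
// name so that timeout or broken-state errors can identify which
// semaphore is stuck.
type namedSemaphore struct {
	name  string
	slots chan struct{}
}

func newNamedSemaphore(name string, capacity int) *namedSemaphore {
	return &namedSemaphore{name: name, slots: make(chan struct{}, capacity)}
}

// Acquire blocks until a unit is available or the context expires; the
// returned error names the semaphore, which is the point of the change.
func (s *namedSemaphore) Acquire(ctx context.Context) error {
	select {
	case s.slots <- struct{}{}:
		return nil
	case <-ctx.Done():
		return fmt.Errorf("semaphore %q: %w", s.name, ctx.Err())
	}
}

func (s *namedSemaphore) Release() {
	<-s.slots
}
```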
1017,12964928623.0,IssuesEvent,2020-07-20 21:21:54,Dawoodoz/DFPSR,https://api.github.com/repos/Dawoodoz/DFPSR,closed,Porting to Win32,enhancement portability,"**What to do**
A stable Windows port of the window wrapper in source/windowManagers would allow running on Microsoft Windows natively in full speed. This is done by defining the createBackendWindow function, which the DFPSR library will be calling to create a window. Create a class inheriting from BackendWindow and implement the virtual methods closely to how the X11 version works. Simply uploading the image being sent, trying to handle resize of the canvas without crashing, and taking input from mouse and keyboard. Full-screen and multi-threaded upload can wait for another pull request if it becomes difficult.
**How to do it**
Stability comes first, so don't try to force the screen's resolution into a dimension that the screen might not be able to handle. Just maximize a border-less window the safe way without exclusive access to the screen. A good system will recognize this as full-screen and get the same level of optimization but without the dangers of incompatible forced settings on unknown display devices. Even though Windows usually comes with pre-installed GPU drivers that do up-scaling, this often results in bi-linear interpolation removing the game's retro look, and the games should have the same look on different platforms. The library already has image upscaling built-in and will send the up-scaled image. All GUI stuff is also handled by the library, so the window backend just feeds input to the message queue while mapping to portable key codes.
**Only system dependencies**
Because this is a zero-dependency library which should be possible to just compile and run together with a program, dynamic linking to third-party media layers is not allowed. The compiled applications should be possible to run on a clean install of the operating system without installing any other software. No other libraries, no 3D accelerated graphics drivers. Users of this library should be able to use this for creating driver installers running before anything else in the system.
**Compiling on Windows**
Last time I compiled on Windows, I made a CodeBlocks project, included the whole content of the DFPSR library, included the program's project folder, created a module in windowManagers, selected G++14 with all warnings and included the Windows libraries. Just having the window module and a list of linked libraries would be enough to see this as completed. Improving the cross-platform build process can be another task.
**Compilers**
Trying to Compile with Microsoft's C++ compiler will fail because it's not standard C++14. The library has compiled with CLang before, but it will likely have its own opinions about style being contrary to GCC's suggestions, so sticking with the latest version of GCC is the easiest way to avoid a mess of ifdefs for each compiler version.",True,"Porting to Win32 - **What to do**
A stable Windows port of the window wrapper in source/windowManagers would allow running on Microsoft Windows natively in full speed. This is done by defining the createBackendWindow function, which the DFPSR library will be calling to create a window. Create a class inheriting from BackendWindow and implement the virtual methods closely to how the X11 version works. Simply uploading the image being sent, trying to handle resize of the canvas without crashing, and taking input from mouse and keyboard. Full-screen and multi-threaded upload can wait for another pull request if it becomes difficult.
**How to do it**
Stability comes first, so don't try to force the screen's resolution into a dimension that the screen might not be able to handle. Just maximize a border-less window the safe way without exclusive access to the screen. A good system will recognize this as full-screen and get the same level of optimization but without the dangers of incompatible forced settings on unknown display devices. Even though Windows usually comes with pre-installed GPU drivers that do up-scaling, this often results in bi-linear interpolation removing the game's retro look, and the games should have the same look on different platforms. The library already has image upscaling built-in and will send the up-scaled image. All GUI stuff is also handled by the library, so the window backend just feeds input to the message queue while mapping to portable key codes.
**Only system dependencies**
Because this is a zero-dependency library which should be possible to just compile and run together with a program, dynamic linking to third-party media layers is not allowed. The compiled applications should be possible to run on a clean install of the operating system without installing any other software. No other libraries, no 3D accelerated graphics drivers. Users of this library should be able to use this for creating driver installers running before anything else in the system.
**Compiling on Windows**
Last time I compiled on Windows, I made a CodeBlocks project, included the whole content of the DFPSR library, included the program's project folder, created a module in windowManagers, selected G++14 with all warnings and included the Windows libraries. Just having the window module and a list of linked libraries would be enough to see this as completed. Improving the cross-platform build process can be another task.
**Compilers**
Trying to Compile with Microsoft's C++ compiler will fail because it's not standard C++14. The library has compiled with CLang before, but it will likely have its own opinions about style being contrary to GCC's suggestions, so sticking with the latest version of GCC is the easiest way to avoid a mess of ifdefs for each compiler version.",1,porting to what to do a stable windows port of the window wrapper in source windowmanagers would allow running on microsoft windows natively in full speed this is done by defining the createbackendwindow function which the dfpsr library will be calling to create a window create a class inheriting from backendwindow and implement the virtual methods closely to how the version works simply uploading the image being sent trying to handle resize of the canvas without crashing and taking input from mouse and keyboard full screen and multi threaded upload can wait for another pull request if it becomes difficult how to do it stability comes first so don t try to force the screen s resolution into a dimension that the screen might not be able to handle just maximize a border less window the safe way without exclusive access to the screen a good system will recognize this as full screen and get the same level of optimization but without the dangers of incompatible forced settings on unknown display devices even if windows usually comes with pre installed gpu drivers that does up scaling this often results in bi linear interpolation removing the game s retro look and the games should have the same look on different platforms the library already have image upscaling built in and will send the up scaled image all gui stuff is also handled by the library so the window backend just feeds input to the message queue while mapping to portable key codes only system dependencies because this is a zero dependency library which should be possible to just compile and run together with a program dynamic linking to third party media layers is not allowed the compiled applications should be possible to run on a clean install of the operating system without installing any other software no other libraries no accelerated graphics drivers users of this library should be able to use this for creating driver installers running before anything else in the system compiling on windows last time i compiled on windows i made a codeblocks project included the whole content of the dfpsr library included the program s project folder created a module in windowmanagers selected g with all warnings and included the windows libraries just having the window module and a list of linked libraries would be enough to see this as completed improving the cross platform build process can be another task compilers trying to compile with microsoft s c compiler will fail because it s not standard c the library has compiled with clang before but it will likely have its own opinions about style being contrary to gcc s suggestions so sticking with the latest version of gcc is the easiest way to avoid a mess of ifdefs for each compiler version ,1
413,6556143978.0,IssuesEvent,2017-09-06 13:11:53,Shinmera/portacle,https://api.github.com/repos/Shinmera/portacle,closed,Mac OS version,portability,"""You can’t use this version of the application ..."" error when launching portacle. It says it needs 10.12. I've only noticed since I have El Capitan on my laptop and Sierra on desktop. Is this really necessary? If not, it could be solved by using a gcc flag -mmacosx-version-min=10.7 (10.7 is reasonable enough, no?).",True,"Mac OS version - ""You can’t use this version of the application ..."" error when launching portacle. It says it needs 10.12. I've only noticed since I have El Capitan on my laptop and Sierra on desktop. Is this really necessary? If not, it could be solved by using a gcc flag -mmacosx-version-min=10.7 (10.7 is reasonable enough, no?).",1,mac os version you can’t use this version of the application error when launching portacle it says it needs i ve only noticed since i have el capitan on my laptop and sierra on desktop is this really necessary if not it could be solved by using a gcc flag mmacosx version min is reasonable enough no ,1
1523,22156004703.0,IssuesEvent,2022-06-03 22:46:58,apache/beam,https://api.github.com/repos/apache/beam,opened,Python process environment factory,portability P3 runner-flink task sdk-py-harness portability-flink,"Provide an easy to use process environment factory that allows for Python worker execution as Docker alternative. Note that we have a base that the user can configure and an attempt to utilize it for the Python Flink post commit test. However, that setup is specific to the Jenkins environment.
Imported from Jira [BEAM-6147](https://issues.apache.org/jira/browse/BEAM-6147). Original Jira may contain additional context.
Reported by: thw.",True,"Python process environment factory - Provide an easy to use process environment factory that allows for Python worker execution as Docker alternative. Note that we have a base that the user can configure and an attempt to utilize it for the Python Flink post commit test. However, that setup is specific to the Jenkins environment.
Imported from Jira [BEAM-6147](https://issues.apache.org/jira/browse/BEAM-6147). Original Jira may contain additional context.
Reported by: thw.",1,python process environment factory provide an easy to use process environment factory that allows for python worker execution as docker alternative note that we have a base that the user can configure and an attempt to utilize it for the python flink post commit test however that setup is specific to the jenkins environment imported from jira original jira may contain additional context reported by thw ,1
1539,22157992922.0,IssuesEvent,2022-06-04 04:01:18,apache/beam,https://api.github.com/repos/apache/beam,opened,Stop job service when pipeline execution finishes,P3 improvement runner-flink portability-flink,"Currently, job servers are shut down when the Python script exits [1]. A better long-term solution would be to shut them down instead when the pipeline is finished executing, such as [2]. This will put resource management in a common code path that is less error-prone.
[1] [https://github.com/apache/beam/blob/c5f43342f914fc8ff367b86fb9294c38436ed3ce/sdks/python/apache_beam/runners/portability/job_server.py#L73](https://github.com/apache/beam/blob/c5f43342f914fc8ff367b86fb9294c38436ed3ce/sdks/python/apache_beam/runners/portability/job_server.py#L73)
[2] [https://github.com/apache/beam/blob/c5f43342f914fc8ff367b86fb9294c38436ed3ce/sdks/python/apache_beam/runners/portability/portable_runner.py#L451](https://github.com/apache/beam/blob/c5f43342f914fc8ff367b86fb9294c38436ed3ce/sdks/python/apache_beam/runners/portability/portable_runner.py#L451)
Imported from Jira [BEAM-8103](https://issues.apache.org/jira/browse/BEAM-8103). Original Jira may contain additional context.
Reported by: ibzib.",True,"Stop job service when pipeline execution finishes - Currently, job servers are shut down when the Python script exits [1]. A better long-term solution would be to shut them down instead when the pipeline is finished executing, such as [2]. This will put resource management in a common code path that is less error-prone.
[1] [https://github.com/apache/beam/blob/c5f43342f914fc8ff367b86fb9294c38436ed3ce/sdks/python/apache_beam/runners/portability/job_server.py#L73](https://github.com/apache/beam/blob/c5f43342f914fc8ff367b86fb9294c38436ed3ce/sdks/python/apache_beam/runners/portability/job_server.py#L73)
[2] [https://github.com/apache/beam/blob/c5f43342f914fc8ff367b86fb9294c38436ed3ce/sdks/python/apache_beam/runners/portability/portable_runner.py#L451](https://github.com/apache/beam/blob/c5f43342f914fc8ff367b86fb9294c38436ed3ce/sdks/python/apache_beam/runners/portability/portable_runner.py#L451)
Imported from Jira [BEAM-8103](https://issues.apache.org/jira/browse/BEAM-8103). Original Jira may contain additional context.
Reported by: ibzib.",1,stop job service when pipeline execution finishes currently job servers are shut down when the python script exits a better long term solution would be to shut them down instead when the pipeline is finished executing such as this will put resource management in a common code path that is less error prone imported from jira original jira may contain additional context reported by ibzib ,1
734,9903948954.0,IssuesEvent,2019-06-27 08:04:24,DECODEproject/zenroom,https://api.github.com/repos/DECODEproject/zenroom,closed,Pluggable RNG callback for certain platform ports,portability,"It should be possible to provide a callback to a custom external RNG. It is required by some platforms who don't support the standard linux/osx/win sources already implemented, for instance the unikernel Cortex port, but also others ports to different language bindings.
Perhaps this will be also a good occasion to formalise better the provision of callbacks for the print to stdout/stderr, as for instance those plugged by preprocessor's `#define`s for javascript/wasm. ",True,"Pluggable RNG callback for certain platform ports - It should be possible to provide a callback to a custom external RNG. It is required by some platforms who don't support the standard linux/osx/win sources already implemented, for instance the unikernel Cortex port, but also others ports to different language bindings.
Perhaps this will be also a good occasion to formalise better the provision of callbacks for the print to stdout/stderr, as for instance those plugged by preprocessor's `#define`s for javascript/wasm. ",1,pluggable rng callback for certain platform ports it should be possible to provide a callback to a custom external rng it is required by some platforms who don t support the standard linux osx win sources already implemented for instance the unikernel cortex port but also others ports to different language bindings perhaps this will be also a good occasion to formalise better the provision of callbacks for the print to stdout stderr as for instance those plugged by preprocessor s define s for javascript wasm ,1
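To make the pluggable-RNG idea from the Zenroom record above concrete, here is an illustrative Go sketch of a callback-based entropy source with a safe default; it is an assumption-laden analogue, not the Zenroom C API.

```go
package example

import (
	"crypto/rand"
	"fmt"
)

// RNGCallback fills buf with random bytes. The default uses the
// platform source via crypto/rand; embedders on platforms without one
// (the unikernel case from the issue) can plug their own.
type RNGCallback func(buf []byte) error

var rngCallback RNGCallback = func(buf []byte) error {
	_, err := rand.Read(buf)
	return err
}

// SetRNGCallback installs a custom entropy source supplied by the host.
func SetRNGCallback(cb RNGCallback) {
	if cb != nil {
		rngCallback = cb
	}
}

// RandomBytes is what library code calls; it never touches the OS
// source directly, only the pluggable callback.
func RandomBytes(n int) ([]byte, error) {
	buf := make([]byte, n)
	if err := rngCallback(buf); err != nil {
		return nil, fmt.Errorf("rng callback failed: %w", err)
	}
	return buf, nil
}
```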
176821,13654490316.0,IssuesEvent,2020-09-27 17:39:40,prokuranepal/DMS_React,https://api.github.com/repos/prokuranepal/DMS_React,opened,Tests for Weather components,good first issue react tests," WeatherDetail, WeatherList need tests
WeatherDetail.js
WeatherList.js
The tests should perform at least
Component testing for the components present and their numbers
Simulation for events like Button Press
Props testing
The project tests are based on jest and enzyme. Tests like test1 or test2 could serve as references.
",1.0,"Tests for Weather components - WeatherDetail, WeatherList need tests
WeatherDetail.js
WeatherList.js
The tests should perform at least
Component testing for the components present and their numbers
Simulation for events like Button Press
Props testing
The project tests are based on jest and enzyme. Tests like test1 or test2 could serve as references.
",0,tests for weather components weatherdetail weatherlist need tests the tests should perform at least component testing for the components present and their numbers simulation for events like button press props testing the project tests are based on jest and enzyme tests like or could serve as references ,0
312,5732077001.0,IssuesEvent,2017-04-21 14:02:40,mongoclient/mongoclient,https://api.github.com/repos/mongoclient/mongoclient,closed,Loading libssl.1.0.0 fails,portable-linux potential bug,"I'm trying to run the portable linux-x64 2.0.0 version and it gets stuck on the please wait screen.
I get the following error when running from terminal:
```
[MONGOCLIENT] [MONGOD-STDERR] /home/myuser/linux-portable-x64/resources/app/bin/mongod: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory
[MONGOCLIENT] [MONGOD-EXIT] 127
```
I'm using fedora 25 with openssl 1.0.2k installed.
I tried linking libssl.so.1.0.0 to libssl.so with no success.
## Possible Solution
Also is it possible to make mongoclient use my systems running mongod instead of running it's own mongod?
## Your Environment:
portable linux-x64 2.0.0 version electron app
Fedora Linux 25 with openssl and openssl-devel 1.0.2k",True,"Loading libssl.1.0.0 fails - I'm trying to run the portable linux-x64 2.0.0 version and it gets stuck on the please wait screen.
I get the following error when running from terminal:
```
[MONGOCLIENT] [MONGOD-STDERR] /home/myuser/linux-portable-x64/resources/app/bin/mongod: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory
[MONGOCLIENT] [MONGOD-EXIT] 127
```
I'm using fedora 25 with openssl 1.0.2k installed.
I tried linking libssl.so.1.0.0 to libssl.so with no success.
## Possible Solution
Also is it possible to make mongoclient use my systems running mongod instead of running it's own mongod?
## Your Environment:
portable linux-x64 2.0.0 version electron app
Fedora Linux 25 with openssl and openssl-devel 1.0.2k",1,loading libssl fails i m trying to run the portable linux version and it gets stuck on the please wait screen i get the following error when running from terminal home myuser linux portable resources app bin mongod error while loading shared libraries libssl so cannot open shared object file no such file or directory i m using fedora with openssl installed i tried linking libssl so to libssl so with no success possible solution also is it possible to make mongoclient use my systems running mongod instead of running it s own mongod your environment portable linux version electron app fedora linux with openssl and openssl devel ,1
809,10546870836.0,IssuesEvent,2019-10-02 22:48:27,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,OpenCL formats failing on macOS with Intel HD Graphics 630,notes/external issues portability,"New problems in Mojave (or since last time I checked)
See also #3235, #3434
```
Device 1: Intel(R) HD Graphics 630
Testing: ansible-opencl, Ansible Vault [PBKDF2-SHA256 HMAC-SHA256 OpenCL]... FAILED (cmp_all(49))
Testing: axcrypt-opencl [SHA1 AES OpenCL]... FAILED (get_key(6))
Testing: EncFS-opencl [PBKDF2-SHA1 AES OpenCL]... FAILED (cmp_all(1))
Testing: OpenBSD-SoftRAID-opencl [PBKDF2-SHA1 AES OpenCL]... FAILED (cmp_all(1))
Testing: telegram-opencl [PBKDF2-SHA1 AES OpenCL]... FAILED (cmp_all(2))
Testing: wpapsk-opencl, WPA/WPA2/PMF/PMKID PSK [PBKDF2-SHA1 OpenCL]... FAILED (cmp_all(10))
Testing: wpapsk-pmk-opencl, WPA/WPA2/PMF/PMKID master key [MD5/SHA-1/SHA-2 OpenCL]... FAILED (cmp_all(3))
7 out of 83 tests have FAILED
```",True,"OpenCL formats failing on macOS with Intel HD Graphics 630 - New problems in Mojave (or since last time I checked)
See also #3235, #3434
```
Device 1: Intel(R) HD Graphics 630
Testing: ansible-opencl, Ansible Vault [PBKDF2-SHA256 HMAC-SHA256 OpenCL]... FAILED (cmp_all(49))
Testing: axcrypt-opencl [SHA1 AES OpenCL]... FAILED (get_key(6))
Testing: EncFS-opencl [PBKDF2-SHA1 AES OpenCL]... FAILED (cmp_all(1))
Testing: OpenBSD-SoftRAID-opencl [PBKDF2-SHA1 AES OpenCL]... FAILED (cmp_all(1))
Testing: telegram-opencl [PBKDF2-SHA1 AES OpenCL]... FAILED (cmp_all(2))
Testing: wpapsk-opencl, WPA/WPA2/PMF/PMKID PSK [PBKDF2-SHA1 OpenCL]... FAILED (cmp_all(10))
Testing: wpapsk-pmk-opencl, WPA/WPA2/PMF/PMKID master key [MD5/SHA-1/SHA-2 OpenCL]... FAILED (cmp_all(3))
7 out of 83 tests have FAILED
```",1,opencl formats failing on macos with intel hd graphics new problems in mojave or since last time i checked see also device intel r hd graphics testing ansible opencl ansible vault failed cmp all testing axcrypt opencl failed get key testing encfs opencl failed cmp all testing openbsd softraid opencl failed cmp all testing telegram opencl failed cmp all testing wpapsk opencl wpa pmf pmkid psk failed cmp all testing wpapsk pmk opencl wpa pmf pmkid master key failed cmp all out of tests have failed ,1
1274,17018701024.0,IssuesEvent,2021-07-02 15:28:15,TYPO3-Solr/ext-solr,https://api.github.com/repos/TYPO3-Solr/ext-solr,closed,[BUG] Faked TSFE does not set applicationType in request,BACKPORTABLE,"**Describe the bug**
When initializing the TSFE a TYPO3_REQUEST is also initialized.
A request must always have a applicationType attribute.
If no applicationType is given `ApplicationType::fromRequest($GLOBALS['TYPO3_REQUEST'])`
can't bee used and throws an exception.
**To Reproduce**
Call `ApplicationType::fromRequest($GLOBALS['TYPO3_REQUEST'])` on the
`$GLOBALS['TYPO3_REQUEST']` that is initialized in
`Classes/FrontendEnvironment/Tsfe.php`
This happens for example in https://github.com/networkteam/sentry_client/blob/master/Classes/Client.php#L78
**Expected behavior**
A valid TYPO3_REQUEST should be initialized
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Used versions (please complete the following information):**
- TYPO3 Version: 10.4.16
- EXT:solr Version: 11.0.3
",True,"[BUG] Faked TSFE does not set applicationType in request - **Describe the bug**
When initializing the TSFE a TYPO3_REQUEST is also initialized.
A request must always have a applicationType attribute.
If no applicationType is given `ApplicationType::fromRequest($GLOBALS['TYPO3_REQUEST'])`
can't bee used and throws an exception.
**To Reproduce**
Call `ApplicationType::fromRequest($GLOBALS['TYPO3_REQUEST'])` on the
`$GLOBALS['TYPO3_REQUEST']` that is initialized in
`Classes/FrontendEnvironment/Tsfe.php`
This happens for example in https://github.com/networkteam/sentry_client/blob/master/Classes/Client.php#L78
**Expected behavior**
A valid TYPO3_REQUEST should be initialized
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Used versions (please complete the following information):**
- TYPO3 Version: 10.4.16
- EXT:solr Version: 11.0.3
",1, faked tsfe does not set applicationtype in request describe the bug when initializing the tsfe a request is also initialized a request must always have a applicationtype attribute if no applicationtype is given applicationtype fromrequest globals can t bee used and throws an exception to reproduce call applicationtype fromrequest globals on the globals that is initialized in classes frontendenvironment tsfe php this happens for example in expected behavior a valid request should be initialized screenshots if applicable add screenshots to help explain your problem used versions please complete the following information version ext solr version ,1
163508,13920506105.0,IssuesEvent,2020-10-21 10:32:17,dry-python/returns,https://api.github.com/repos/dry-python/returns,closed,Interfaces documentation,documentation,"This issue is to track what interface we have to document yet, and to split the PRs as well into multiples instead of one with all interfaces:
- [x] Mappable
- [x] Bindable
- [ ] Applicative
- [ ] Container",1.0,"Interfaces documentation - This issue is to track what interface we have to document yet, and to split the PRs as well into multiples instead of one with all interfaces:
- [x] Mappable
- [x] Bindable
- [ ] Applicative
- [ ] Container",0,interfaces documentation this issue is to track what interface we have to document yet and to split the prs as well into multiples instead of one with all interfaces mappable bindable applicative container,0
226,4629425776.0,IssuesEvent,2016-09-28 09:13:11,ocaml/opam-repository,https://api.github.com/repos/ocaml/opam-repository,closed,Package zarith 1.4.1 fails to install in Cygwin+MinGW,portability,"This is related to #5588 : because `gmp.h` is in `/usr/include` and not `/usr/local/include`, `zarith` also fails to find the `gmp` package.
Incidentally, the `opam` file already considers some OSes in a special manner (openbsd, freebsd and darwin), so it suffices to consider cygwin as one of these ""special"" OSes. The `CFLAGS` variable will then be set accordingly.
A second issue I had with zarith 1.4.1 was line 242 in the `configure` file:
if test ""$ocamllibdir"" = ""auto""; then ocamllibdir=`ocamlc -where`; fi
Due to Windows (and possibly a MinGW OCaml compiler) shenanigans, `ocamlc` introduces a `\r` at the end of the command, and so the `ocamllibdir` variable contains the `\r` which prevents the rest from working. I hacked a `| tr -d '\r'` after the `-where` and it worked, but it seems there is already a `echo_n()` function for that.
I didn't submit a pull request because I don't know the exact right way to fix these issues (e.g. I don't have time now to fix them the proper way), but after doing both these patches (and patching my `conf-gmp` as mentioned in #5588) I was able to install zarith on my Cygwin.
I didn't test its actual *usage* though.",True,"Package zarith 1.4.1 fails to install in Cygwin+MinGW - This is related to #5588 : because `gmp.h` is in `/usr/include` and not `/usr/local/include`, `zarith` also fails to find the `gmp` package.
Incidentally, the `opam` file already considers some OSes in a special manner (openbsd, freebsd and darwin), so it suffices to consider cygwin as one of these ""special"" OSes. The `CFLAGS` variable will then be set accordingly.
A second issue I had with zarith 1.4.1 was line 242 in the `configure` file:
if test ""$ocamllibdir"" = ""auto""; then ocamllibdir=`ocamlc -where`; fi
Due to Windows (and possibly a MinGW OCaml compiler) shenanigans, `ocamlc` introduces a `\r` at the end of the command, and so the `ocamllibdir` variable contains the `\r` which prevents the rest from working. I hacked a `| tr -d '\r'` after the `-where` and it worked, but it seems there is already a `echo_n()` function for that.
I didn't submit a pull request because I don't know the exact right way to fix these issues (e.g. I don't have time now to fix them the proper way), but after doing both these patches (and patching my `conf-gmp` as mentioned in #5588) I was able to install zarith on my Cygwin.
I didn't test its actual *usage* though.",1,package zarith fails to install in cygwin mingw this is related to because gmp h is in usr include and not usr local include zarith also fails to find the gmp package incidentally the opam file already considers some oses in a special manner openbsd freebsd and darwin so it suffices to consider cygwin as one of these special oses the cflags variable will then be set accordingly a second issue i had with zarith was line in the configure file if test ocamllibdir auto then ocamllibdir ocamlc where fi due to windows and possibly a mingw ocaml compiler shenanigans ocamlc introduces a r at the end of the command and so the ocamllibdir variable contains the r which prevents the rest from working i hacked a tr d r after the where and it worked but it seems there is already a echo n function for that i didn t submit a pull request because i don t know the exact right way to fix these issues e g i don t have time now to fix them the proper way but after doing both these patches and patching my conf gmp as mentioned in i was able to install zarith on my cygwin i didn t test its actual usage though ,1
147307,13205642297.0,IssuesEvent,2020-08-14 18:23:50,DS4PS/cpp-526-sum-2020,https://api.github.com/repos/DS4PS/cpp-526-sum-2020,opened,Chapter 1 - Arithmetic in R and Function sum(),documentation final-dashboard,"I'm having issues with function `sum()` and `NA` values.
**Expectation:** I expected to get the sum of all values in variable `x`.
",1.0,"Chapter 1 - Arithmetic in R and Function sum() - I'm having issues with function `sum()` and `NA` values.
**Expectation:** I expected to get the sum of all values in variable `x`.
",0,chapter arithmetic in r and function sum i m having issues with function sum and na values expectation i expected to get the sum of all values in variable x ,0
88487,10572714114.0,IssuesEvent,2019-10-07 10:11:46,StefanNieuwenhuis/databindr,https://api.github.com/repos/StefanNieuwenhuis/databindr,closed,Add proper readme,documentation,"As user I want to know how to use this library, so I want to be informed through an elaborate readme.",1.0,"Add proper readme - As user I want to know how to use this library, so I want to be informed through an elaborate readme.",0,add proper readme as user i want to know how to use this library so i want to be informed through an elaborate readme ,0
267085,8378961435.0,IssuesEvent,2018-10-06 19:35:33,swcarpentry/amy,https://api.github.com/repos/swcarpentry/amy,closed,open training should be an option for assigning training requests to ttt events,component: user interface (UI) priority: essential type: bug,"Related to https://github.com/swcarpentry/amy/issues/1055
When matching training requests to a ttt event from the training request page, there is no check box to mark them as open training applicants.

This functionality **does** negatively affect usability as @sheraaron's workflow involves going to the request page and assigning people to trainings in bulk.
@karenword @maneesha ",1.0,"open training should be an option for assigning training requests to ttt events - Related to https://github.com/swcarpentry/amy/issues/1055
When matching training requests to a ttt event from the training request page, there is no check box to mark them as open training applicants.

This functionality **does** negatively affect usability as @sheraaron's workflow involves going to the request page and assigning people to trainings in bulk.
@karenword @maneesha ",0,open training should be an option for assigning training requests to ttt events related to when matching training requests to a ttt event from the training request page there is no check box to mark them as open training applicants this functionality does negatively affect usability as sheraaron s workflow involves going to the request page and assigning people to trainings in bulk karenword maneesha ,0
193477,14653875159.0,IssuesEvent,2020-12-28 07:14:06,github-vet/rangeloop-pointer-findings,https://api.github.com/repos/github-vet/rangeloop-pointer-findings,closed,coreos/fleet: fleetctl/destroy_test.go; 25 LoC,fresh small test,"
Found a possible issue in [coreos/fleet](https://www.github.com/coreos/fleet) at [fleetctl/destroy_test.go](https://github.com/coreos/fleet/blob/4522498327e92ffe6fa24eaa087c73e5af4adb53/fleetctl/destroy_test.go#L67-L91)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> range-loop variable r used in defer or goroutine at line 76
[Click here to see the code in its original context.](https://github.com/coreos/fleet/blob/4522498327e92ffe6fa24eaa087c73e5af4adb53/fleetctl/destroy_test.go#L67-L91)
Click here to show the 25 line(s) of Go which triggered the analyzer.
```go
for _, r := range results {
var wg sync.WaitGroup
errchan := make(chan error)
cAPI = newFakeRegistryForCommands(unitPrefix, len(r.units), false)
wg.Add(2)
go func() {
defer wg.Done()
doDestroyUnits(t, r, errchan)
}()
go func() {
defer wg.Done()
doDestroyUnits(t, r, errchan)
}()
go func() {
wg.Wait()
close(errchan)
}()
for err := range errchan {
t.Errorf(""%v"", err)
}
}
```
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 4522498327e92ffe6fa24eaa087c73e5af4adb53
",1.0,"coreos/fleet: fleetctl/destroy_test.go; 25 LoC -
Found a possible issue in [coreos/fleet](https://www.github.com/coreos/fleet) at [fleetctl/destroy_test.go](https://github.com/coreos/fleet/blob/4522498327e92ffe6fa24eaa087c73e5af4adb53/fleetctl/destroy_test.go#L67-L91)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> range-loop variable r used in defer or goroutine at line 76
[Click here to see the code in its original context.](https://github.com/coreos/fleet/blob/4522498327e92ffe6fa24eaa087c73e5af4adb53/fleetctl/destroy_test.go#L67-L91)
Click here to show the 25 line(s) of Go which triggered the analyzer.
```go
for _, r := range results {
var wg sync.WaitGroup
errchan := make(chan error)
cAPI = newFakeRegistryForCommands(unitPrefix, len(r.units), false)
wg.Add(2)
go func() {
defer wg.Done()
doDestroyUnits(t, r, errchan)
}()
go func() {
defer wg.Done()
doDestroyUnits(t, r, errchan)
}()
go func() {
wg.Wait()
close(errchan)
}()
for err := range errchan {
t.Errorf(""%v"", err)
}
}
```
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 4522498327e92ffe6fa24eaa087c73e5af4adb53
",0,coreos fleet fleetctl destroy test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message range loop variable r used in defer or goroutine at line click here to show the line s of go which triggered the analyzer go for r range results var wg sync waitgroup errchan make chan error capi newfakeregistryforcommands unitprefix len r units false wg add go func defer wg done dodestroyunits t r errchan go func defer wg done dodestroyunits t r errchan go func wg wait close errchan for err range errchan t errorf v err leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id ,0
11354,2649533147.0,IssuesEvent,2015-03-15 00:50:10,Badadroid/badadroid,https://api.github.com/repos/Badadroid/badadroid,closed,insufficient memory! (move ext2system.img),auto-migrated Priority-Medium Type-Defect,"```
can you move the ext2system.img to the sd card?
I can't change theme and run samsung apps application because i haven't enough
space!
```
Original issue reported on code.google.com by `granzier...@gmail.com` on 14 Dec 2011 at 2:47",1.0,"insufficient memory! (move ext2system.img) - ```
can you move the ext2system.img to the sd card?
I can't change theme and run samsung apps application because i haven't enough
space!
```
Original issue reported on code.google.com by `granzier...@gmail.com` on 14 Dec 2011 at 2:47",0,insufficient memory move img can you move the img to the sd card i can t change theme and run samsung apps application because i haven t enoght space original issue reported on code google com by granzier gmail com on dec at ,0
581,7986105716.0,IssuesEvent,2018-07-19 00:02:45,rust-lang-nursery/stdsimd,https://api.github.com/repos/rust-lang-nursery/stdsimd,closed,Portable vector shuffles,A-portable,"I've just submitted a PR with an API for portable vector shuffles (https://github.com/rust-lang-nursery/stdsimd/pull/387). From the docs:
```rust
// Shuffle allows reordering the elements of a vector:
let x = i32x4::new(1, 2, 3, 4);
let r = shuffle!(x, [2, 1, 3, 0]);
assert_eq!(r, i32x4::new(3, 2, 4, 1));
// The resulting vector can be smaller than the input:
let r = shuffle!(x, [1, 3]);
assert_eq!(r, i32x2::new(2, 4));
// Equal:
let r = shuffle!(x, [1, 3, 2, 0]);
assert_eq!(r, i32x4::new(2, 4, 3, 1));
// Or larger (at most twice as large):
et r = shuffle!(x, [1, 3, 2, 2, 1, 3, 2, 2]);
assert_eq!(r, i32x8::new(2, 4, 3, 3, 2, 4, 3, 3));
// It also allows reordering elements of two vectors:
let y = i32x4::new(5, 6, 7, 8);
let r = shuffle!(x, y, [4, 0, 5, 1]);
assert_eq!(r, i32x4::new(5, 1, 6, 2));
// And this can be used to construct larger or smaller
// vectors as well.
```
It would be nice to gather feed-back on this.",True,"Portable vector shuffles - I've just submitted a PR with an API for portable vector shuffles (https://github.com/rust-lang-nursery/stdsimd/pull/387). From the docs:
```rust
// Shuffle allows reordering the elements of a vector:
let x = i32x4::new(1, 2, 3, 4);
let r = shuffle!(x, [2, 1, 3, 0]);
assert_eq!(r, i32x4::new(3, 2, 4, 1));
// The resulting vector can be smaller than the input:
let r = shuffle!(x, [1, 3]);
assert_eq!(r, i32x2::new(2, 4));
// Equal:
let r = shuffle!(x, [1, 3, 2, 0]);
assert_eq!(r, i32x4::new(2, 4, 3, 1));
// Or larger (at most twice as large):
et r = shuffle!(x, [1, 3, 2, 2, 1, 3, 2, 2]);
assert_eq!(r, i32x8::new(2, 4, 3, 3, 2, 4, 3, 3));
// It also allows reordering elements of two vectors:
let y = i32x4::new(5, 6, 7, 8);
let r = shuffle!(x, y, [4, 0, 5, 1]);
assert_eq!(r, i32x4::new(5, 1, 6, 2));
// And this can be used to construct larger or smaller
// vectors as well.
```
It would be nice to gather feed-back on this.",1,portable vector shuffles i ve just submitted a pr with an api for portable vector shuffles from the docs rust shuffle allows reordering the elements of a vector let x new let r shuffle x assert eq r new the resulting vector can be smaller than the input let r shuffle x assert eq r new equal let r shuffle x assert eq r new or larger at most twice as large et r shuffle x assert eq r new it also allows reordering elements of two vectors let y new let r shuffle x y assert eq r new and this can be used to construct larger or smaller vectors as well it would be nice to gather feed back on this ,1
439920,12690494028.0,IssuesEvent,2020-06-21 12:27:06,buddyboss/buddyboss-platform,https://api.github.com/repos/buddyboss/buddyboss-platform,opened,Audio file preview for other formats,feature: enhancement priority: medium,"**Is your feature request related to a problem? Please describe.**
Audio file preview for files uploaded in documents only supports MP3, WAV, and OGG

**Describe alternatives you've considered**
Audio preview should be based on the MIME type
**Support ticket links**
https://secure.helpscout.net/conversation/1198744310/78453",1.0,"Audio file preview for other formats - **Is your feature request related to a problem? Please describe.**
Audio file preview for files uploaded in documents only supports MP3, WAV, and OGG

**Describe alternatives you've considered**
Audio preview should be based on the MIME type
**Support ticket links**
https://secure.helpscout.net/conversation/1198744310/78453",0,audio file preview for other formats is your feature request related to a problem please describe audio file preview uploaded in documents only supports wav and ogg describe alternatives you ve considered audio preview should be based on the mime type support ticket links ,0
778507,27318748458.0,IssuesEvent,2023-02-24 17:49:18,GoogleContainerTools/skaffold,https://api.github.com/repos/GoogleContainerTools/skaffold,closed,Not able to reference secrets path to home folder,kind/bug priority/p3,"
### Expected behavior
I want to reference a secret file in `build.artifacts.docker.secret.src` in the home directory. This works well if using the realpath but not ~.
### Actual behavior
Using the realpath /home/username/.npmrc works, but using ~/.npmrc doesn't work because skaffold is appending ~/.npmrc to the current working directory, making it search in the wrong place.
I also tried to reference the home directory using the $HOME env variable, but it's not a templated field, so it doesn't work
### Information
- Skaffold version: skaffold 2.1.0
- Operating system: MacOS Ventura 13
- Installed via: brew
- Contents of skaffold.yaml:
```yaml
build:
local:
useBuildkit: true
artifacts:
- image: image-name
context: .
docker:
secrets:
- id: npmrc
src: ~/.npmrc
```
",1.0,"Not able to reference secrets path to home folder -
### Expected behavior
I want to reference a secret file in `build.artifacts.docker.secret.src` in the home directory. This works well if using the realpath but not ~.
### Actual behavior
Using the realpath /home/username/.npmrc works, but using ~/.npmrc doesn't work because skaffold is appending ~/.npmrc to the current working directory, making it search in the wrong place.
I also tried to reference the home directory using the $HOME env variable, but it's not a templated field, so it doesn't work
### Information
- Skaffold version: skaffold 2.1.0
- Operating system: MacOS Ventura 13
- Installed via: brew
- Contents of skaffold.yaml:
```yaml
build:
local:
useBuildkit: true
artifacts:
- image: image-name
context: .
docker:
secrets:
- id: npmrc
src: ~/.npmrc
```
",0,not able to reference secrets path to home folder issues without logs and details are more complicated to fix please help us by filling the template below expected behavior i want to reference a secret file in build artifacts docker secret src in the home directory this works well if using the realpath but not actual behavior using the realpath home username npmrc works but using npmrc doesn t work because skaffold is appending npmrc to the actual working directory making it search to the wrong place i also tried to reference the home directory using home env variable but it s not a templated field so doesn t work information skaffold version skaffold operating system macos ventura installed via brew contents of skaffold yaml yaml build local usebuildkit true artifacts image image name context docker secrets id npmrc src npmrc ,0
639,8578486660.0,IssuesEvent,2018-11-13 05:20:00,chapel-lang/chapel,https://api.github.com/repos/chapel-lang/chapel,opened,Heterogeneous GASNET_DOMAIN_COUNT value causes hangs,area: Third-Party type: Portability,"Under gasnet-aries we set `GASNET_DOMAIN_COUNT` to ``. We've been seeing intermittent hangs on a heterogeneous system, which we've narrowed down to a single job getting different node types. As a specific example, a job on a 28-core BW and 68-core KNL will hang. However, not all combinations seem to hang, as a job on 10-core IV and 20-core IV seems to work.
Here's an awful patch that I've been using to reproduce. It sets `GASNET_DOMAIN_COUNT` to `SLURM_NODEID+1`
```diff
diff --git a/runtime/src/comm/gasnet/comm-gasnet.c b/runtime/src/comm/gasnet/comm-gasnet.c
index 2b1e0c6f25..76971f6014 100644
--- a/runtime/src/comm/gasnet/comm-gasnet.c
+++ b/runtime/src/comm/gasnet/comm-gasnet.c
@@ -766,12 +766,13 @@ static void set_max_segsize() {
}
}
+#include ""chpl-env.h""
static void set_num_comm_domains() {
#if defined(GASNET_CONDUIT_GEMINI) || defined(GASNET_CONDUIT_ARIES)
char num_cpus_val[22]; // big enough for an unsigned 64-bit quantity
int num_cpus;
- num_cpus = chpl_topo_getNumCPUsPhysical(true) + 1;
+ num_cpus = chpl_env_str_to_int(""SLURM_NODEID"", getenv(""SLURM_NODEID""), 0) + 1;
snprintf(num_cpus_val, sizeof(num_cpus_val), ""%d"", num_cpus);
if (setenv(""GASNET_DOMAIN_COUNT"", num_cpus_val, 0) != 0) {
```
This fails for me with 2 or more locales.",True,"Heterogeneous GASNET_DOMAIN_COUNT value causes hangs - Under gasnet-aries we set `GASNET_DOMAIN_COUNT` to ``. We've been seeing intermittent hangs on a heterogenous system, which we've narrowed down to a single job getting different node types. As a specific example a job on a 28-core BW and 68-core KNL will hang. However, not all combinations seem to hang as a job on 10-core IV and 20-core IV seems to work.
Here's an awful patch that I've been using to reproduce. It sets `GASNET_DOMAIN_COUNT` to `SLURM_NODEID+1`
```diff
diff --git a/runtime/src/comm/gasnet/comm-gasnet.c b/runtime/src/comm/gasnet/comm-gasnet.c
index 2b1e0c6f25..76971f6014 100644
--- a/runtime/src/comm/gasnet/comm-gasnet.c
+++ b/runtime/src/comm/gasnet/comm-gasnet.c
@@ -766,12 +766,13 @@ static void set_max_segsize() {
}
}
+#include ""chpl-env.h""
static void set_num_comm_domains() {
#if defined(GASNET_CONDUIT_GEMINI) || defined(GASNET_CONDUIT_ARIES)
char num_cpus_val[22]; // big enough for an unsigned 64-bit quantity
int num_cpus;
- num_cpus = chpl_topo_getNumCPUsPhysical(true) + 1;
+ num_cpus = chpl_env_str_to_int(""SLURM_NODEID"", getenv(""SLURM_NODEID""), 0) + 1;
snprintf(num_cpus_val, sizeof(num_cpus_val), ""%d"", num_cpus);
if (setenv(""GASNET_DOMAIN_COUNT"", num_cpus_val, 0) != 0) {
```
This fails for me with 2 or more locales.",1,heterogeneous gasnet domain count value causes hangs under gasnet aries we set gasnet domain count to we ve been seeing intermittent hangs on a heterogenous system which we ve narrowed down to a single job getting different node types as a specific example a job on a core bw and core knl will hang however not all combinations seem to hang as a job on core iv and core iv seems to work here s an awful patch that i ve been using to reproduce it sets gasnet domain count to slurm nodeid diff diff git a runtime src comm gasnet comm gasnet c b runtime src comm gasnet comm gasnet c index a runtime src comm gasnet comm gasnet c b runtime src comm gasnet comm gasnet c static void set max segsize include chpl env h static void set num comm domains if defined gasnet conduit gemini defined gasnet conduit aries char num cpus val big enough for an unsigned bit quantity int num cpus num cpus chpl topo getnumcpusphysical true num cpus chpl env str to int slurm nodeid getenv slurm nodeid snprintf num cpus val sizeof num cpus val d num cpus if setenv gasnet domain count num cpus val this fails for me with or more locales ,1
589399,17695950223.0,IssuesEvent,2021-08-24 15:18:18,EclipseFdn/react-eclipsefdn-members,https://api.github.com/repos/EclipseFdn/react-eclipsefdn-members,closed,Add a field to record the type of organization. ,Front End Backend top-priority,"We need to add a field to record the type of organization.
This would be a mandatory drop down list and would appear before the Member Representative section.
The options would include:
- For Profit Organization
- Non-Profit Open Source Organization / User Group
- Academic Organization
- Standards Organization
- Government Organization, Government Agency, or NGO
- Publishing/Media Organization
- Research Institute
- All others
This information would also be useful to have as part of the summary received by the Membership Coordination team.
For reference, please see the screenshot attached.
Thanks,
Zahra

",1.0,"Add a field to record the type of organization. - We need to add a field to record the type of organization.
This would be a mandatory drop down list and would appear before the Member Representative section.
The options would include:
- For Profit Organization
- Non-Profit Open Source Organization / User Group
- Academic Organization
- Standards Organization
- Government Organization, Government Agency, or NGO
- Publishing/Media Organization
- Research Institute
- All others
This information would also be useful to have as part of the summary received by the Membership Coordination team.
For reference, please see the screenshot attached.
Thanks,
Zahra

",0,add a field to record the type of organization we need to add a field to record the type of organization this would be a mandatory drop down list and would appear before the member representative section the options would include for profit organization non profit open source organization user group academic organization standards organization government organization government agency or ngo publishing media organization research institute all others this information would also be useful to have as part of the summary received by the membership coordination team for reference please see the screenshot attached thanks zahra ,0
482,6971598878.0,IssuesEvent,2017-12-11 14:33:06,edenhill/librdkafka,https://api.github.com/repos/edenhill/librdkafka,closed,segfault with latest kafkacat/librdkafka and Kafka 0.11.0,bug portability,"I have built the latest `kafkacat`/`librdkafka` using kafkacat's `bootstrap.sh`.
I got:
```
Version 1.3.1-13-ga6b599 (JSON) (librdkafka 0.11.0-RC1 builtin.features=gzip,snappy,ssl,sasl,regex,lz4,sasl_gssapi,sasl_plain,sasl_scram,plugins)
```
I am using it to contact Kafka brokers using Kerberos and SSL, with the -L option.
For brokers running 0.10.2, `kafkacat` works perfectly well, all the time.
Simply changing the broker queried to one running Kafka 0.11.0, I get segfault most of the time (more than 9 times out of 10).",True,"segfault with latest kafkacat/librdkafka and Kafka 0.11.0 - I have built the latest `kafkacat`/`librdkafka` using kafkacat's `bootstrap.sh`.
I got:
```
Version 1.3.1-13-ga6b599 (JSON) (librdkafka 0.11.0-RC1 builtin.features=gzip,snappy,ssl,sasl,regex,lz4,sasl_gssapi,sasl_plain,sasl_scram,plugins)
```
I am using it to contact Kafka brokers using Kerberos and SSL, with the -L option.
For brokers running 0.10.2, `kafkacat` works perfectly well, all the time.
Simply changing the broker queried to one running Kafka 0.11.0, I get segfault most of the time (more than 9 times out of 10).",1,segfault with latest kafkacat librdkafka and kafka i have built the latest kafkacat librdkafka using kafkacat s bootstrap sh i got version json librdkafka builtin features gzip snappy ssl sasl regex sasl gssapi sasl plain sasl scram plugins i am using it contact kafka brokers using kerberos and ssl with the l option for brokers running kafkacat works perfectly well all the time simply changing the broker queried to one running kafka i get segfault most of the time more than times out of ,1
27406,21698978381.0,IssuesEvent,2022-05-10 00:24:38,celeritas-project/celeritas,https://api.github.com/repos/celeritas-project/celeritas,closed,Prototype performance portability,infrastructure,Do an initial port of enough core Celeritas components to at least run some demo applications on non-CUDA hardware.,1.0,Prototype performance portability - Do an initial port of enough core Celeritas components to at least run some demo applications on non-CUDA hardware.,0,prototype performance portability do an initial port of enough core celeritas components to at least run some demo applications on non cuda hardware ,0
1534,22157283056.0,IssuesEvent,2022-06-04 01:52:53,apache/beam,https://api.github.com/repos/apache/beam,opened,Convert external transforms to use StringUtf8Coder instead of ByteArrayCoder,portability P3 improvement sdk-java-core clarified sdk-py-core io-java-kafka,"We currently encode Strings using implicit UTF8 byte arrays. Now, we could use the newly introduced StringUtf8Coder ModelCoder.
Imported from Jira [BEAM-7244](https://issues.apache.org/jira/browse/BEAM-7244). Original Jira may contain additional context.
Reported by: mxm.",True,"Convert external transforms to use StringUtf8Coder instead of ByteArrayCoder - We currently encode Strings using implicit UTF8 byte arrays. Now, we could use the newly introduced StringUtf8Coder ModelCoder.
Imported from Jira [BEAM-7244](https://issues.apache.org/jira/browse/BEAM-7244). Original Jira may contain additional context.
Reported by: mxm.",1,convert external transforms to use instead of bytearraycoder we currently encode strings using implicit byte arrays now we could use the newly introduced modelcoder imported from jira original jira may contain additional context reported by mxm ,1
1810,26775172763.0,IssuesEvent,2023-01-31 16:38:31,alcionai/corso,https://api.github.com/repos/alcionai/corso,closed,GC: Add `Beta` Servicer feature,supportability,"- To complete Backup / Restore pipelines, a beta connector is required for Pages calls.
##### Related to:
- #2169
- #2071
- #2173",True,"GC: Add `Beta` Servicer feature - - To complete Backup / Restore pipelines, a beta connector is required for Pages calls.
##### Related to:
- #2169
- #2071
- #2173",1,gc add beta servicer feature to complete backup restore pipelines a beta connector is required for pages calls related to ,1
444698,12819520343.0,IssuesEvent,2020-07-06 02:28:09,PMEAL/OpenPNM,https://api.github.com/repos/PMEAL/OpenPNM,closed,Run check_network_health as part of _sanity_check,enhancement high priority,"While creating an example for permeability benchmark, I just realized that extracted networks could have disconnected pores, which leads to solver divergence, leaving the user clueless of the cause. I think it's essential that we check for network health prior to running the algorithm. This could be part of the recently added `_sanity_check` method.",1.0,"Run check_network_health as part of _sanity_check - While creating an example for permeability benchmark, I just realized that extracted networks could have disconnected pores, which leads to solver divergence, leaving the user clueless of the cause. I think it's essential that we check for network health prior to running the algorithm. This could be part of the recently added `_sanity_check` method.",0,run check network health as part of sanity check while creating an example for permeability benchmark i just realized that extracted networks could have disconnected pores which leads to solver divergence leaving the user clueless of the cause i think it s essential that we check for network health prior to running the algorithm this could be part of the recently added sanity check method ,0
82795,7852918061.0,IssuesEvent,2018-06-20 15:49:28,eclipse/openj9,https://api.github.com/repos/eclipse/openj9,closed,functional test migration follow-up issues,comp:test,"functional tests will be moved to `functional` folder in `openj9/test/`, Issue #1679 once this movement is done. We need some follow up PR to enhance this movement.
1. move `getdependency` to functional level.
2. make build_list more flexible - it can be targeted to each sub-level of the test folder.
3. update readme file to pointing to the right position and mention functional folder. ",1.0,"functional test migration follow-up issues - functional tests will be moved to `functional` folder in `openj9/test/`, Issue #1679 once this movement is done. We need some follow up PR to enhance this movement.
1. move `getdependency` to functional level.
2. make build_list more flexible - it can be targeted to each sub-level of the test folder.
3. update readme file to pointing to the right position and mention functional folder. ",0,functional test migration follow up issues functional tests will be moved to functional folder in test issue once this movement is done we need some follow up pr to enhance this movement move getdependency to functional level make build list more flexible it can be target to each sub level of test folder update readme file to pointing to the right position and mention functional folder ,0
229948,18460366955.0,IssuesEvent,2021-10-15 23:51:39,eclipse-openj9/openj9,https://api.github.com/repos/eclipse-openj9/openj9,closed,NPE running DefaultStaticInvokeTest,comp:jit test failure blocker segfault project:MH,"Latest occurrence at https://openj9-jenkins.osuosl.org/job/Test_openjdk17_j9_sanity.openjdk_x86-64_windows_Nightly/8/consoleFull
```
21:27:44 openjdk version ""17-internal"" 2021-09-14
21:27:44 OpenJDK Runtime Environment (build 17-internal+0-adhoc.****.buildjdk17x86-64windowsnightly)
21:27:44 Eclipse OpenJ9 VM (build v0.28.0-release-18bebe9f4e3, JRE 17 Windows Server 2012 R2 amd64-64-Bit Compressed References 20210829_8 (JIT enabled, AOT enabled)
21:27:44 OpenJ9 - 18bebe9f4e3
21:27:44 OMR - 1d0a329
21:27:44 JCL - 712145ee3f5 based on jdk-17+35)
21:51:08 test DefaultStaticInvokeTest.testMethodHandleInvoke(""TestClass7"", ""TestIF7.TestClass7""): failure
21:51:08 java.lang.NullPointerException: Cannot invoke ""java.lang.invoke.MethodHandle.linkToVirtual(java.lang.Object, java.lang.Object, java.lang.invoke.MemberName)"" because """" is null
21:51:08 at java.base/java.lang.invoke.LambdaForm$DMH/0x0000000000000000.invokeVirtual(LambdaForm$DMH)
21:51:08 at java.lang.invoke.LambdaForm$MH/0x0000000095dfc060.invoke(LambdaForm$MH)
21:51:08 at java.lang.invoke.LambdaForm$MH/0x0000000095ddf6f0.invoke_MT(LambdaForm$MH)
21:51:08 at DefaultStaticInvokeTest.testMethodHandleInvoke(DefaultStaticInvokeTest.java:169)
21:51:08 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
21:51:08 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
21:51:08 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
21:51:08 at java.base/java.lang.reflect.Method.invoke(Method.java:568)
21:51:08 at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:132)
21:51:08 at org.testng.internal.TestInvoker.invokeMethod(TestInvoker.java:599)
21:51:08 at org.testng.internal.TestInvoker.invokeTestMethod(TestInvoker.java:174)
21:51:08 at org.testng.internal.MethodRunner.runInSequence(MethodRunner.java:46)
21:51:08 at org.testng.internal.TestInvoker$MethodInvocationAgent.invoke(TestInvoker.java:822)
21:51:08 at org.testng.internal.TestInvoker.invokeTestMethods(TestInvoker.java:147)
21:51:08 at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:146)
21:51:08 at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:128)
21:51:08 at org.testng.TestRunner$$Lambda$129/0x0000000096210428.accept(Unknown Source)
21:51:08 at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
21:51:08 at org.testng.TestRunner.privateRun(TestRunner.java:764)
21:51:08 at org.testng.TestRunner.run(TestRunner.java:585)
21:51:08 at org.testng.SuiteRunner.runTest(SuiteRunner.java:384)
21:51:08 at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:378)
21:51:08 at org.testng.SuiteRunner.privateRun(SuiteRunner.java:337)
21:51:08 at org.testng.SuiteRunner.run(SuiteRunner.java:286)
21:51:08 at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:53)
21:51:08 at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:96)
21:51:08 at org.testng.TestNG.runSuitesSequentially(TestNG.java:1218)
21:51:08 at org.testng.TestNG.runSuitesLocally(TestNG.java:1140)
21:51:08 at org.testng.TestNG.runSuites(TestNG.java:1069)
21:51:08 at org.testng.TestNG.run(TestNG.java:1037)
21:51:08 at com.sun.javatest.regtest.agent.TestNGRunner.main(TestNGRunner.java:94)
21:51:08 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
21:51:08 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
21:51:08 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
21:51:08 at java.base/java.lang.reflect.Method.invoke(Method.java:568)
21:51:08 at com.sun.javatest.regtest.agent.MainActionHelper$AgentVMRunnable.run(MainActionHelper.java:312)
21:51:08 at java.base/java.lang.Thread.run(Thread.java:884)
21:51:08 ===============================================
21:51:08 java/lang/reflect/DefaultStaticTest/DefaultStaticInvokeTest.java
21:51:08 Total tests run: 276, Passes: 274, Failures: 2, Skips: 0
21:51:08 ===============================================
21:51:08
21:51:08 STDERR:
21:51:08 java.lang.Exception: failures: 2
21:51:08 at com.sun.javatest.regtest.agent.TestNGRunner.main(TestNGRunner.java:96)
21:51:08 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
21:51:08 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
21:51:08 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
21:51:08 at java.base/java.lang.reflect.Method.invoke(Method.java:568)
21:51:08 at com.sun.javatest.regtest.agent.MainActionHelper$AgentVMRunnable.run(MainActionHelper.java:312)
21:51:08 at java.base/java.lang.Thread.run(Thread.java:884)
21:51:08
21:51:08 JavaTest Message: Test threw exception: java.lang.Exception
21:51:08 JavaTest Message: shutting down test
21:51:08
21:51:08
21:51:08 TEST RESULT: Failed. Execution failed: `main' threw exception: java.lang.Exception: failures: 2
21:51:08 --------------------------------------------------
21:54:09 Test results: passed: 766; failed: 1
21:54:10 Report written to F:\Users\jenkins\workspace\Test_openjdk17_j9_sanity.openjdk_x86-64_windows_Nightly\jvmtest\openjdk\report\html\report.html
21:54:10 Results written to F:\Users\jenkins\workspace\Test_openjdk17_j9_sanity.openjdk_x86-64_windows_Nightly\aqa-tests\TKG\output_16302870106882\jdk_lang_1\work
21:54:10 Error: Some tests failed or other problems occurred.
21:54:10
21:54:10 jdk_lang_1_FAILED
```",1.0,"NPE running DefaultStaticInvokeTest - Latest occurrence at https://openj9-jenkins.osuosl.org/job/Test_openjdk17_j9_sanity.openjdk_x86-64_windows_Nightly/8/consoleFull
```
21:27:44 openjdk version ""17-internal"" 2021-09-14
21:27:44 OpenJDK Runtime Environment (build 17-internal+0-adhoc.****.buildjdk17x86-64windowsnightly)
21:27:44 Eclipse OpenJ9 VM (build v0.28.0-release-18bebe9f4e3, JRE 17 Windows Server 2012 R2 amd64-64-Bit Compressed References 20210829_8 (JIT enabled, AOT enabled)
21:27:44 OpenJ9 - 18bebe9f4e3
21:27:44 OMR - 1d0a329
21:27:44 JCL - 712145ee3f5 based on jdk-17+35)
21:51:08 test DefaultStaticInvokeTest.testMethodHandleInvoke(""TestClass7"", ""TestIF7.TestClass7""): failure
21:51:08 java.lang.NullPointerException: Cannot invoke ""java.lang.invoke.MethodHandle.linkToVirtual(java.lang.Object, java.lang.Object, java.lang.invoke.MemberName)"" because """" is null
21:51:08 at java.base/java.lang.invoke.LambdaForm$DMH/0x0000000000000000.invokeVirtual(LambdaForm$DMH)
21:51:08 at java.lang.invoke.LambdaForm$MH/0x0000000095dfc060.invoke(LambdaForm$MH)
21:51:08 at java.lang.invoke.LambdaForm$MH/0x0000000095ddf6f0.invoke_MT(LambdaForm$MH)
21:51:08 at DefaultStaticInvokeTest.testMethodHandleInvoke(DefaultStaticInvokeTest.java:169)
21:51:08 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
21:51:08 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
21:51:08 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
21:51:08 at java.base/java.lang.reflect.Method.invoke(Method.java:568)
21:51:08 at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:132)
21:51:08 at org.testng.internal.TestInvoker.invokeMethod(TestInvoker.java:599)
21:51:08 at org.testng.internal.TestInvoker.invokeTestMethod(TestInvoker.java:174)
21:51:08 at org.testng.internal.MethodRunner.runInSequence(MethodRunner.java:46)
21:51:08 at org.testng.internal.TestInvoker$MethodInvocationAgent.invoke(TestInvoker.java:822)
21:51:08 at org.testng.internal.TestInvoker.invokeTestMethods(TestInvoker.java:147)
21:51:08 at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:146)
21:51:08 at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:128)
21:51:08 at org.testng.TestRunner$$Lambda$129/0x0000000096210428.accept(Unknown Source)
21:51:08 at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
21:51:08 at org.testng.TestRunner.privateRun(TestRunner.java:764)
21:51:08 at org.testng.TestRunner.run(TestRunner.java:585)
21:51:08 at org.testng.SuiteRunner.runTest(SuiteRunner.java:384)
21:51:08 at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:378)
21:51:08 at org.testng.SuiteRunner.privateRun(SuiteRunner.java:337)
21:51:08 at org.testng.SuiteRunner.run(SuiteRunner.java:286)
21:51:08 at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:53)
21:51:08 at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:96)
21:51:08 at org.testng.TestNG.runSuitesSequentially(TestNG.java:1218)
21:51:08 at org.testng.TestNG.runSuitesLocally(TestNG.java:1140)
21:51:08 at org.testng.TestNG.runSuites(TestNG.java:1069)
21:51:08 at org.testng.TestNG.run(TestNG.java:1037)
21:51:08 at com.sun.javatest.regtest.agent.TestNGRunner.main(TestNGRunner.java:94)
21:51:08 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
21:51:08 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
21:51:08 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
21:51:08 at java.base/java.lang.reflect.Method.invoke(Method.java:568)
21:51:08 at com.sun.javatest.regtest.agent.MainActionHelper$AgentVMRunnable.run(MainActionHelper.java:312)
21:51:08 at java.base/java.lang.Thread.run(Thread.java:884)
21:51:08 ===============================================
21:51:08 java/lang/reflect/DefaultStaticTest/DefaultStaticInvokeTest.java
21:51:08 Total tests run: 276, Passes: 274, Failures: 2, Skips: 0
21:51:08 ===============================================
21:51:08
21:51:08 STDERR:
21:51:08 java.lang.Exception: failures: 2
21:51:08 at com.sun.javatest.regtest.agent.TestNGRunner.main(TestNGRunner.java:96)
21:51:08 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
21:51:08 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
21:51:08 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
21:51:08 at java.base/java.lang.reflect.Method.invoke(Method.java:568)
21:51:08 at com.sun.javatest.regtest.agent.MainActionHelper$AgentVMRunnable.run(MainActionHelper.java:312)
21:51:08 at java.base/java.lang.Thread.run(Thread.java:884)
21:51:08
21:51:08 JavaTest Message: Test threw exception: java.lang.Exception
21:51:08 JavaTest Message: shutting down test
21:51:08
21:51:08
21:51:08 TEST RESULT: Failed. Execution failed: `main' threw exception: java.lang.Exception: failures: 2
21:51:08 --------------------------------------------------
21:54:09 Test results: passed: 766; failed: 1
21:54:10 Report written to F:\Users\jenkins\workspace\Test_openjdk17_j9_sanity.openjdk_x86-64_windows_Nightly\jvmtest\openjdk\report\html\report.html
21:54:10 Results written to F:\Users\jenkins\workspace\Test_openjdk17_j9_sanity.openjdk_x86-64_windows_Nightly\aqa-tests\TKG\output_16302870106882\jdk_lang_1\work
21:54:10 Error: Some tests failed or other problems occurred.
21:54:10
21:54:10 jdk_lang_1_FAILED
```",0,npe running defaultstaticinvoketest latest occurrence at openjdk version internal openjdk runtime environment build internal adhoc eclipse vm build release jre windows server bit compressed references jit enabled aot enabled omr jcl based on jdk test defaultstaticinvoketest testmethodhandleinvoke failure java lang nullpointerexception cannot invoke java lang invoke methodhandle linktovirtual java lang object java lang object java lang invoke membername because is null at java base java lang invoke lambdaform dmh invokevirtual lambdaform dmh at java lang invoke lambdaform mh invoke lambdaform mh at java lang invoke lambdaform mh invoke mt lambdaform mh at defaultstaticinvoketest testmethodhandleinvoke defaultstaticinvoketest java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org testng internal methodinvocationhelper invokemethod methodinvocationhelper java at org testng internal testinvoker invokemethod testinvoker java at org testng internal testinvoker invoketestmethod testinvoker java at org testng internal methodrunner runinsequence methodrunner java at org testng internal testinvoker methodinvocationagent invoke testinvoker java at org testng internal testinvoker invoketestmethods testinvoker java at org testng internal testmethodworker invoketestmethods testmethodworker java at org testng internal testmethodworker run testmethodworker java at org testng testrunner lambda accept unknown source at java base java util arraylist foreach arraylist java at org testng testrunner privaterun testrunner java at org testng testrunner run testrunner java at org testng suiterunner runtest suiterunner java at org testng suiterunner runsequentially suiterunner java at org testng suiterunner privaterun suiterunner java at org testng suiterunner run suiterunner java at org testng suiterunnerworker runsuite suiterunnerworker java at org testng suiterunnerworker run suiterunnerworker java at org testng testng runsuitessequentially testng java at org testng testng runsuiteslocally testng java at org testng testng runsuites testng java at org testng testng run testng java at com sun javatest regtest agent testngrunner main testngrunner java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at com sun javatest regtest agent mainactionhelper agentvmrunnable run mainactionhelper java at java base java lang thread run thread java java lang reflect defaultstatictest defaultstaticinvoketest java total tests run passes failures skips stderr java lang exception failures at com sun javatest regtest agent testngrunner main testngrunner java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at com sun javatest regtest agent mainactionhelper agentvmrunnable run mainactionhelper 
java at java base java lang thread run thread java javatest message test threw exception java lang exception javatest message shutting down test test result failed execution failed main threw exception java lang exception failures test results passed failed report written to f users jenkins workspace test sanity openjdk windows nightly jvmtest openjdk report html report html results written to f users jenkins workspace test sanity openjdk windows nightly aqa tests tkg output jdk lang work error some tests failed or other problems occurred jdk lang failed ,0
1795,26485588137.0,IssuesEvent,2023-01-17 17:44:18,elastic/kibana,https://api.github.com/repos/elastic/kibana,closed,[Portable Dashboards][Controls] Build API Examples,Feature:Input Control Team:Presentation loe:week impact:high Project:Controls Project:Portable Dashboard,"### Why do we need examples?
Currently, the Control Group has the potential to be the unified, user friendly querying system used in many solutions pages. This is blocked by the fact that the only existing usage of the Controls API happens inside the dashboard, which uses a non-preferred creation and syncing method that makes implementing Controls look far more complicated than it is in reality. This could scare off potential implementations.
To showcase the ease of use of Controls, and their excellent DX, we should build a Control Group Examples plugin.
### Examples of examples?
This Control Group example plugin could potentially contain examples of:
- An empty control group in edit mode, where the user is capable of choosing a data view and adding controls via the editor. This may require an add control button, with a configuration object that could be passed in via the `getCreationOptions` callback. (See https://github.com/elastic/kibana/issues/145429 for more info)
- A control group with some preconfigured Controls which use the factory / builder pattern.
- A control group which is set up to listen to filters from a unified search implementation, and output its own filters. A read only JSON editor underneath could display the final, combined filters.
- A control group with its state backed up into the URL. Any selections made in a control should show up in the URL.
- _Optionally_ - A control group with a new type of add button configured with an array of different controls which could be added and removed by the user with one click.
These are just some ideas, any other examples are welcome! In this process, it is likely that things will need to change on the Control Group embeddable side, which is all fair game!
### Documentation
Additionally, the PR that closes this issue could write more detailed API documentation, and explanations above each example.
### Question:
With the Control Group covered nicely in the example plugin, do we need to continue maintaining the Controls storybooks?
",True,"[Portable Dashboards][Controls] Build API Examples - ### Why do we need examples?
Currently, the Control Group has the potential to be the unified, user friendly querying system used in many solutions pages. This is blocked by the fact that the only existing usage of the Controls API happens inside the dashboard, which uses a non-preferred creation and syncing method that makes implementing Controls look far more complicated than it is in reality. This could scare off potential implementations.
To showcase the ease of use of Controls, and their excellent DX, we should build a Control Group Examples plugin.
### Examples of examples?
This Control Group example plugin could potentially contain examples of:
- An empty control group in edit mode, where the user is capable of choosing a data view and adding controls via the editor. This may require an add control button, with a configuration object that could be passed in via the `getCreationOptions` callback. (See https://github.com/elastic/kibana/issues/145429 for more info)
- A control group with some preconfigured Controls which use the factory / builder pattern.
- A control group which is set up to listen to filters from a unified search implementation, and output its own filters. A read only JSON editor underneath could display the final, combined filters.
- A control group with its state backed up into the URL. Any selections made in a control should show up in the URL.
- _Optionally_ - A control group with a new type of add button configured with an array of different controls which could be added and removed by the user with one click.
These are just some ideas, any other examples are welcome! In this process, it is likely that things will need to change on the Control Group embeddable side, which is all fair game!
### Documentation
Additionally, the PR that closes this issue could write more detailed API documentation, and explanations above each example.
### Question:
With the Control Group covered nicely in the example plugin, do we need to continue maintaining the Controls storybooks?
",1, build api examples why do we need examples currently the control group has the potential to be the unified user friendly querying system used in many solutions pages this is blocked by the fact that the only existing usage of the controls api happens inside the dashboard which uses a non preferred creation and syncing method that makes implementing controls look far more complicated than it is in reality this could scare off potential implementations to showcase the ease of use of controls and their excellent dx we should build a control group examples plugin examples of examples this control group example plugin could potentially contain examples of an empty control group in edit mode where the user is capable of choosing a data view and adding controls via the editor this may require an add control button with a configuration object that could be passed in via the getcreationoptions callback see for more info a control group with some preconfigured controls which use the factory builder pattern a control group which is set up to listen to filters from a unified search implementation and output its own filters a read only json editor underneath could display the final combined filters a control group with its state backed up into the url any selections made in a control should show up in the url optionally a control group with a new type of add button configured with an array of different controls which could be added and removed by the user with one click these are just some ideas any other examples are welcome in this process it is likely that things will need to change on the control group embeddable side which is all fair game documentation additionally the pr that closes this issue could write more detailed api documentation and explanations above each example question with the control group covered nicely in the example plugin do we need to continue maintaining the controls storybooks ,1
1703,24737459895.0,IssuesEvent,2022-10-20 23:54:24,verilator/verilator,https://api.github.com/repos/verilator/verilator,closed,Verilated objects fails to build when using Macos Monterey,area: portability,"Thanks for taking the time to report this.
# Can you attach an example that shows the issue? (Must be openly licensed, ideally in test_regress format.)
The code is at https://github.com/tianrui-wei/verilator-reproduce
Both examples are from using verilator --timing
## When using the system-bundled clang compiler
`verilator --timing --cc --exe half_adder_tb.v half_adder.v`
` make -C obj_dir -f Vhalf_adder_tb.mk CC=clang CXX='clang++ -std=c++20 -fcoroutines-ts'`
Gives me the error
```
In file included from /usr/local/share/verilator/include/verilated_timing.cpp:23:
/usr/local/share/verilator/include/verilated_timing.h:52:12: fatal error: 'coroutine' file not found
# include <coroutine>
^~~~~~~~~~~
1 error generated.
```
## When using the homebrew llvm version and the bundled libc++ library
`verilator --timing --cc --exe half_adder_tb.v half_adder.v`
`make -C obj_dir -f Vhalf_adder_tb.mk CC=clang CXX='clang++ -std=c++20`
Gives me the error
```
In file included from /usr/local/share/verilator/include/verilated_timing.cpp:23:
/usr/local/share/verilator/include/verilated_timing.h:98:10: error: no template named 'coroutine_handle' in namespace 'std'
std::coroutine_handle<> m_coro; // The wrapped coroutine handle
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:106:28: error: no template named 'coroutine_handle' in namespace 'std'
VlCoroutineHandle(std::coroutine_handle<> coro, VlFileLineDebug fileline)
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:183:37: error: no template named 'coroutine_handle' in namespace 'std'
void await_suspend(std::coroutine_handle<> coro) {
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:184:23: error: no matching member function for call to 'push_back'
queue.push_back({delay, VlCoroutineHandle{coro, fileline}});
~~~~~~^~~~~~~~~
/opt/homebrew/opt/llvm/bin/../include/c++/v1/vector:582:66: note: candidate function not viable: cannot convert initializer list argument to 'const std::vector::value_type' (aka 'const VlDelayScheduler::VlDelayedCoroutine')
_LIBCPP_CONSTEXPR_AFTER_CXX17 _LIBCPP_INLINE_VISIBILITY void push_back(const_reference __x);
^
/opt/homebrew/opt/llvm/bin/../include/c++/v1/vector:584:66: note: candidate function not viable: cannot convert initializer list argument to 'std::vector::value_type' (aka 'VlDelayScheduler::VlDelayedCoroutine')
_LIBCPP_CONSTEXPR_AFTER_CXX17 _LIBCPP_INLINE_VISIBILITY void push_back(value_type&& __x);
^
In file included from /usr/local/share/verilator/include/verilated_timing.cpp:23:
/usr/local/share/verilator/include/verilated_timing.h:231:37: error: no template named 'coroutine_handle' in namespace 'std'
void await_suspend(std::coroutine_handle<> coro) {
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:246:29: error: no template named 'coroutine_handle' in namespace 'std'
bool await_suspend(std::coroutine_handle<>) const { return false; } // Resume immediately
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:256:29: error: no template named 'coroutine_handle' in namespace 'std'
void await_suspend(std::coroutine_handle<> coro) const { coro.destroy(); }
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:290:37: error: no template named 'coroutine_handle' in namespace 'std'
void await_suspend(std::coroutine_handle<> coro) { join->m_susp = {coro, fileline}; }
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:305:14: error: no template named 'coroutine_handle' in namespace 'std'
std::coroutine_handle<> m_continuation; // Coroutine to resume after this one finishes
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:313:14: error: no type named 'suspend_never' in namespace 'std'
std::suspend_never initial_suspend() const { return {}; }
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:317:14: error: no type named 'suspend_never' in namespace 'std'
std::suspend_never final_suspend() noexcept;
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:350:29: error: no template named 'coroutine_handle' in namespace 'std'
void await_suspend(std::coroutine_handle<> coro) { m_promisep->m_continuation = coro; }
~~~~~^
/usr/local/share/verilator/include/verilated_timing.cpp:161:6: error: no type named 'suspend_never' in namespace 'std'
std::suspend_never VlCoroutine::VlPromise::final_suspend() noexcept {
~~~~~^
13 errors generated.
```
# What 'verilator --version' are you using? Did you try it with the git master version?
- Verilator 5.001 devel rev v4.228-148-g8dacbdec3
- This is the git master version
# What OS and distribution are you using?
M1 Pro Macbook Pro, Macos Monterey 12.6
# May we assist you in trying to fix this in Verilator yourself?
Yes that'd be great",True,"Verilated objects fails to build when using Macos Monterey - Thanks for taking the time to report this.
# Can you attach an example that shows the issue? (Must be openly licensed, ideally in test_regress format.)
The code is at https://github.com/tianrui-wei/verilator-reproduce
Both examples are from using verilator --timing
## When using the system-bundled clang compiler
`verilator --timing --cc --exe half_adder_tb.v half_adder.v`
` make -C obj_dir -f Vhalf_adder_tb.mk CC=clang CXX='clang++ -std=c++20 -fcoroutines-ts'`
Gives me the error
```
In file included from /usr/local/share/verilator/include/verilated_timing.cpp:23:
/usr/local/share/verilator/include/verilated_timing.h:52:12: fatal error: 'coroutine' file not found
# include <coroutine>
^~~~~~~~~~~
1 error generated.
```
## When using the homebrew llvm version and the bundled libc++ library
`verilator --timing --cc --exe half_adder_tb.v half_adder.v`
`make -C obj_dir -f Vhalf_adder_tb.mk CC=clang CXX='clang++ -std=c++20`
Gives me the error
```
In file included from /usr/local/share/verilator/include/verilated_timing.cpp:23:
/usr/local/share/verilator/include/verilated_timing.h:98:10: error: no template named 'coroutine_handle' in namespace 'std'
std::coroutine_handle<> m_coro; // The wrapped coroutine handle
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:106:28: error: no template named 'coroutine_handle' in namespace 'std'
VlCoroutineHandle(std::coroutine_handle<> coro, VlFileLineDebug fileline)
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:183:37: error: no template named 'coroutine_handle' in namespace 'std'
void await_suspend(std::coroutine_handle<> coro) {
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:184:23: error: no matching member function for call to 'push_back'
queue.push_back({delay, VlCoroutineHandle{coro, fileline}});
~~~~~~^~~~~~~~~
/opt/homebrew/opt/llvm/bin/../include/c++/v1/vector:582:66: note: candidate function not viable: cannot convert initializer list argument to 'const std::vector::value_type' (aka 'const VlDelayScheduler::VlDelayedCoroutine')
_LIBCPP_CONSTEXPR_AFTER_CXX17 _LIBCPP_INLINE_VISIBILITY void push_back(const_reference __x);
^
/opt/homebrew/opt/llvm/bin/../include/c++/v1/vector:584:66: note: candidate function not viable: cannot convert initializer list argument to 'std::vector::value_type' (aka 'VlDelayScheduler::VlDelayedCoroutine')
_LIBCPP_CONSTEXPR_AFTER_CXX17 _LIBCPP_INLINE_VISIBILITY void push_back(value_type&& __x);
^
In file included from /usr/local/share/verilator/include/verilated_timing.cpp:23:
/usr/local/share/verilator/include/verilated_timing.h:231:37: error: no template named 'coroutine_handle' in namespace 'std'
void await_suspend(std::coroutine_handle<> coro) {
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:246:29: error: no template named 'coroutine_handle' in namespace 'std'
bool await_suspend(std::coroutine_handle<>) const { return false; } // Resume immediately
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:256:29: error: no template named 'coroutine_handle' in namespace 'std'
void await_suspend(std::coroutine_handle<> coro) const { coro.destroy(); }
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:290:37: error: no template named 'coroutine_handle' in namespace 'std'
void await_suspend(std::coroutine_handle<> coro) { join->m_susp = {coro, fileline}; }
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:305:14: error: no template named 'coroutine_handle' in namespace 'std'
std::coroutine_handle<> m_continuation; // Coroutine to resume after this one finishes
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:313:14: error: no type named 'suspend_never' in namespace 'std'
std::suspend_never initial_suspend() const { return {}; }
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:317:14: error: no type named 'suspend_never' in namespace 'std'
std::suspend_never final_suspend() noexcept;
~~~~~^
/usr/local/share/verilator/include/verilated_timing.h:350:29: error: no template named 'coroutine_handle' in namespace 'std'
void await_suspend(std::coroutine_handle<> coro) { m_promisep->m_continuation = coro; }
~~~~~^
/usr/local/share/verilator/include/verilated_timing.cpp:161:6: error: no type named 'suspend_never' in namespace 'std'
std::suspend_never VlCoroutine::VlPromise::final_suspend() noexcept {
~~~~~^
13 errors generated.
```
# What 'verilator --version' are you using? Did you try it with the git master version?
- Verilator 5.001 devel rev v4.228-148-g8dacbdec3
- This is the git master version
# What OS and distribution are you using?
M1 Pro Macbook Pro, Macos Monterey 12.6
# May we assist you in trying to fix this in Verilator yourself?
Yes that'd be great",1,verilated objects fails to build when using macos monterey thanks for taking the time to report this can you attach an example that shows the issue must be openly licensed ideally in test regress format the code is at both examples are from using verilator timing when using the system bundled clang compiler verilator timing cc exe half adder tb v half adder v make c obj dir f vhalf adder tb mk cc clang cxx clang std c fcoroutines ts gives me the error in file included from usr local share verilator include verilated timing cpp usr local share verilator include verilated timing h fatal error coroutine file not found include error generated when using the homebrew llvm version and the bundled libc library verilator timing cc exe half adder tb v half adder v make c obj dir f vhalf adder tb mk cc clang cxx clang std c gives me the error in file included from usr local share verilator include verilated timing cpp usr local share verilator include verilated timing h error no template named coroutine handle in namespace std std coroutine handle m coro the wrapped coroutine handle usr local share verilator include verilated timing h error no template named coroutine handle in namespace std vlcoroutinehandle std coroutine handle coro vlfilelinedebug fileline usr local share verilator include verilated timing h error no template named coroutine handle in namespace std void await suspend std coroutine handle coro usr local share verilator include verilated timing h error no matching member function for call to push back queue push back delay vlcoroutinehandle coro fileline opt homebrew opt llvm bin include c vector note candidate function not viable cannot convert initializer list argument to const std vector value type aka const vldelayscheduler vldelayedcoroutine libcpp constexpr after libcpp inline visibility void push back const reference x opt homebrew opt llvm bin include c vector note candidate function not viable cannot convert initializer list argument to std vector value type aka vldelayscheduler vldelayedcoroutine libcpp constexpr after libcpp inline visibility void push back value type x in file included from usr local share verilator include verilated timing cpp usr local share verilator include verilated timing h error no template named coroutine handle in namespace std void await suspend std coroutine handle coro usr local share verilator include verilated timing h error no template named coroutine handle in namespace std bool await suspend std coroutine handle const return false resume immediately usr local share verilator include verilated timing h error no template named coroutine handle in namespace std void await suspend std coroutine handle coro const coro destroy usr local share verilator include verilated timing h error no template named coroutine handle in namespace std void await suspend std coroutine handle coro join m susp coro fileline usr local share verilator include verilated timing h error no template named coroutine handle in namespace std std coroutine handle m continuation coroutine to resume after this one finishes usr local share verilator include verilated timing h error no type named suspend never in namespace std std suspend never initial suspend const return usr local share verilator include verilated timing h error no type named suspend never in namespace std std suspend never final suspend noexcept usr local share verilator include verilated timing h error no template named coroutine handle in namespace std void await suspend std 
coroutine handle coro m promisep m continuation coro usr local share verilator include verilated timing cpp error no type named suspend never in namespace std std suspend never vlcoroutine vlpromise final suspend noexcept errors generated what verilator version are you using did you try it with the git master version verilator devel rev this is the git master version what os and distribution are you using pro macbook pro macos monterey may we assist you in trying to fix this in verilator yourself yes that d be great,1
1370,19671483285.0,IssuesEvent,2022-01-11 07:52:08,internet2-org/rust-aluvm,https://api.github.com/repos/internet2-org/rust-aluvm,closed,`CoreRegs::default()` causes stack overflow,bug *portability* upstream,"## Current Behavior
```rust
#[test]
fn my_test() { let _register = CoreRegs::default(); }
```
fails with
```
thread 'isa::exec::tests::my_test' has overflowed its stack
fatal runtime error: stack overflow
```
## Cause
https://github.com/internet2-org/rust-aluvm/blob/a7e64abd4a7058bf9f06886452147810b4a5cf6d/src/reg/core_regs.rs#L99
With `Box<[LibSite; CALL_STACK_SIZE]>`, the value would be allocated on the stack first, but `CALL_STACK_SIZE = 1 << 16` is too big, resulting in overflow.
## Possible Solutions
Use `Vec` instead.
## Related Issues
https://github.com/rust-lang/rust/issues/53827",True,"`CoreRegs::default()` causes stack overflow - ## Current Behavior
```rust
#[test]
fn my_test() { let _register = CoreRegs::default(); }
```
fails with
```
thread 'isa::exec::tests::my_test' has overflowed its stack
fatal runtime error: stack overflow
```
## Cause
https://github.com/internet2-org/rust-aluvm/blob/a7e64abd4a7058bf9f06886452147810b4a5cf6d/src/reg/core_regs.rs#L99
With `Box<[LibSite; CALL_STACK_SIZE]>`, the value would be allocated on the stack first, but `CALL_STACK_SIZE = 1 << 16` is too big, resulting in overflow.
## Possible Solutions
Use `Vec` instead.
## Related Issues
https://github.com/rust-lang/rust/issues/53827",1, coreregs default causes stack overflow current behavior rust fn my test let register coreregs default fails with thread isa exec tests my test has overflowed its stack fatal runtime error stack overflow cause with box the value would be allocated to stack first but call stack size is too big resuting in overflow possible solutions use vec instead related issues ,1
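To make the failure mode concrete: `Box::new([T; N])` builds the array on the caller's stack before moving it to the heap, while `vec![...]` allocates on the heap directly. The Rust sketch below uses a hypothetical `Site` stand-in for `LibSite` and an illustrative constant; it is a sketch of the suggested `Vec` fix, not the aluvm code itself.
```rust
// CALL_STACK_SIZE and Site are illustrative stand-ins, not the aluvm definitions.
const CALL_STACK_SIZE: usize = 1 << 16;

#[allow(dead_code)]
#[derive(Clone, Copy, Default)]
struct Site {
    lib: u64,
    pos: u16,
}

fn main() {
    // Box::new first materialises the whole array on the stack, then moves it
    // to the heap; with 1 << 16 elements this can overflow the thread stack:
    // let risky: Box<[Site; CALL_STACK_SIZE]> = Box::new([Site::default(); CALL_STACK_SIZE]);

    // vec! allocates on the heap up front, so no large stack temporary exists.
    let safe: Vec<Site> = vec![Site::default(); CALL_STACK_SIZE];
    assert_eq!(safe.len(), CALL_STACK_SIZE);
}
```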
416119,12139743413.0,IssuesEvent,2020-04-23 19:21:36,hyphacoop/organizing,https://api.github.com/repos/hyphacoop/organizing,opened,Host design jam x2 to refine internal look and feel,[priority-★★☆] look&feel wg:business-planning,"_This initial comment is collaborative and open to modification by all._
## Task Summary
🎟️ **Re-ticketed from:** #
🗣 **Loomio:** N/A
📅 **Due date:** N/A
🎯 **Success criteria:** Have developed look and feel for our internal spaces.
Building on #77 and coming out of convo in `1010-04-23 bizdev call`, we wanted to carry the visual look through handbook and chat
## To Do
- [ ] have discussion abt current icons in chat and look and feel
- [ ] jam on ideas for handbook + chat
- ...
",1.0,"Host design jam x2 to refine internal look and feel - _This initial comment is collaborative and open to modification by all._
## Task Summary
🎟️ **Re-ticketed from:** #
🗣 **Loomio:** N/A
📅 **Due date:** N/A
🎯 **Success criteria:** Have developed look and feel for our internal spaces.
Building on #77 and coming out of convo in `1010-04-23 bizdev call`, we wanted to carry the visual look through handbook and chat
## To Do
- [ ] have discussion abt current icons in chat and look and feel
- [ ] jam on ideas for handbook + chat
- ...
",0,host design jam to refine internal look and feel this initial comment is collaborative and open to modification by all task summary 🎟️ re ticketed from 🗣 loomio n a 📅 due date n a 🎯 success criteria have developed look and feel for our internal spaces building on and coming out of convo in bizdev call we wanted to carry the visual look through handbook and chat to do have discussion abt current icons in chat and look and feel jam on ideas for handbook chat ,0
6640,2816057729.0,IssuesEvent,2015-05-19 09:25:33,osakagamba/7GIQSKRNE5P3AVTZCEFXE3ON,https://api.github.com/repos/osakagamba/7GIQSKRNE5P3AVTZCEFXE3ON,closed,Ulhdu0ODZyUoazB2v+QfcYqgc5+Bmg032fATGx0Ms5Qhp9nW2ewqjCFeXL0pdDmlvboaOhnX8pSgbU0yjnFw6YwNNLGISnRzwsSTVTSbmJFzqdo3O4kuMkAoZhmTYGW9qBp/npy8ZUVRpbm6um+rZfI+t9eTV4e8dj3FsuBwquk=,design,aedPphIq7MUjcFjJ8UwpFn9IB4NtMEB7/fM4BldbSqGN0mDCMBB4GYdRmfm8jlv5KBfPgT6eVnTzb4bz3mGZlxyoVExtzmd20XT2lWxv9C67Vb14KsJpCDT0w50wniaOW2fru6oBJsJDh8wkhFOUW9Y48l+w09qDCH9IfI1ugKpXoIlfDEfwpjbvIYwXxvdFiJ6uew2IMJPKkw7CzIxM2VShxWdSr7qQEdJwji3hE7i7vurEBdqfI4E03B8hcq4XsJpmWMsJn9RmsDAgYwyYM8dT6drTKnOZq0YOVNNpN7dMD/RCk/5TCsYc+lELU6JSvbQlD3MgvnLlElDA80hJEH+fShA+cU8VW9CSc7OivKVHKGr7EUeEZy0Z2a4U9h9riaCAW+MqVrjWetKerSXLLrsz6NHOiCccYfGw/cR9FbhLXiVKOmBqybQNihkCfgnNb02D9Nt20B/HYdvi1JL5ycXfv3vQixtrZCBh3+E/ey+0Cgy2h9QzFR/16O6i1yPL0wRlbw0hw6u4u4//U/QBAtSb2865Nyob6jkOaTmPtNxmz6WzP6gu+so/XPVVTbTM5uvoiMolvo6So6x3eoFyrv8e+GtCo4DqFMwkHPcPKisrvEfP6e12HGhycgLqaZ699uq+sfBQ8fnUfrW7p8bo53PSG0Y/ypA6fAbqoH4yjKRIz2Xa+VENm8pQkVGjYr66dXCoykH9itCRtvUMbQoA99YEGI2Rh2dAiqdx8PgMMxr7NToP4O9nOwdS1jPxd8DZdfNBkwPRxd7saQAsdQWYAFp3G+77zZXOzLO79uRQ3Qugy4DUzNxWbDpS7DpDWwAQjxckflhX06hDRb4U4/1kfElTNyACw8CJVZ4kcPwcBL8w3KzFuHuS+LvnrCF+awfeh0w6RHn13wNiUQ+5CbH9M9t5LvtOs5/tGbBimm/30DMnHjq2bzonlty51DtPy5aGcg9n9zOcvMzW+iD0yZCN0WfP4HxnLR1qAIPoZ+/p3CH5OsCssbfMnwah9oGb35BbACIyi/1gSe3cI4esBN7L1ob4U0b9dndEX/QHA6RcsEfKxzb5pwPKx12cNArW8FwxgEht0ZqEeOx3ZG8oOzk0bNHgRkiGCXcWfTqNVDpNVWv70LdO6gQTowJGYWCcPZ3UDJusGnmxtxWTS/LM1k5l7rDBnOxC///Jzdi+19ASGhHy05A/RG/9lR4VNInW4/N+EO7WvicPWhTYpJn2+t7rvu25OU4qVmW29xQadIxTgPs2Q+wJtGIbyjJvfOpDrnI9SJuKaPCYRycHAEeISDcYVHngzZHMyknMZM7/z0UDDj2oU61rFKHFXFPnlo2cMQu83zal1gLcAS/u31ngQ1xbPe9I+ZNaC0Ol1jX+u6aPHW0JHJIki2iZdk8wk7p7O1t5,1.0,Ulhdu0ODZyUoazB2v+QfcYqgc5+Bmg032fATGx0Ms5Qhp9nW2ewqjCFeXL0pdDmlvboaOhnX8pSgbU0yjnFw6YwNNLGISnRzwsSTVTSbmJFzqdo3O4kuMkAoZhmTYGW9qBp/npy8ZUVRpbm6um+rZfI+t9eTV4e8dj3FsuBwquk= - aedPphIq7MUjcFjJ8UwpFn9IB4NtMEB7/fM4BldbSqGN0mDCMBB4GYdRmfm8jlv5KBfPgT6eVnTzb4bz3mGZlxyoVExtzmd20XT2lWxv9C67Vb14KsJpCDT0w50wniaOW2fru6oBJsJDh8wkhFOUW9Y48l+w09qDCH9IfI1ugKpXoIlfDEfwpjbvIYwXxvdFiJ6uew2IMJPKkw7CzIxM2VShxWdSr7qQEdJwji3hE7i7vurEBdqfI4E03B8hcq4XsJpmWMsJn9RmsDAgYwyYM8dT6drTKnOZq0YOVNNpN7dMD/RCk/5TCsYc+lELU6JSvbQlD3MgvnLlElDA80hJEH+fShA+cU8VW9CSc7OivKVHKGr7EUeEZy0Z2a4U9h9riaCAW+MqVrjWetKerSXLLrsz6NHOiCccYfGw/cR9FbhLXiVKOmBqybQNihkCfgnNb02D9Nt20B/HYdvi1JL5ycXfv3vQixtrZCBh3+E/ey+0Cgy2h9QzFR/16O6i1yPL0wRlbw0hw6u4u4//U/QBAtSb2865Nyob6jkOaTmPtNxmz6WzP6gu+so/XPVVTbTM5uvoiMolvo6So6x3eoFyrv8e+GtCo4DqFMwkHPcPKisrvEfP6e12HGhycgLqaZ699uq+sfBQ8fnUfrW7p8bo53PSG0Y/ypA6fAbqoH4yjKRIz2Xa+VENm8pQkVGjYr66dXCoykH9itCRtvUMbQoA99YEGI2Rh2dAiqdx8PgMMxr7NToP4O9nOwdS1jPxd8DZdfNBkwPRxd7saQAsdQWYAFp3G+77zZXOzLO79uRQ3Qugy4DUzNxWbDpS7DpDWwAQjxckflhX06hDRb4U4/1kfElTNyACw8CJVZ4kcPwcBL8w3KzFuHuS+LvnrCF+awfeh0w6RHn13wNiUQ+5CbH9M9t5LvtOs5/tGbBimm/30DMnHjq2bzonlty51DtPy5aGcg9n9zOcvMzW+iD0yZCN0WfP4HxnLR1qAIPoZ+/p3CH5OsCssbfMnwah9oGb35BbACIyi/1gSe3cI4esBN7L1ob4U0b9dndEX/QHA6RcsEfKxzb5pwPKx12cNArW8FwxgEht0ZqEeOx3ZG8oOzk0bNHgRkiGCXcWfTqNVDpNVWv70LdO6gQTowJGYWCcPZ3UDJusGnmxtxWTS/LM1k5l7rDBnOxC///Jzdi+19ASGhHy05A/RG/9lR4VNInW4/N+EO7WvicPWhTYpJn2+t7rvu25OU4qVmW29xQadIxTgPs2Q+wJtGIbyjJvfOpDrnI9SJuKaPCYRycHAEeISDcYVHngzZHMyknMZM7/z0UDDj2oU61rFKHFXFPnlo2cMQu83zal1gLcAS/u31ngQ1xbPe9I+ZNaC0Ol1jX+u6aPHW0JHJIki2iZdk8wk7p7O1t5,0, rzfi rck fsha e ey u so lvnrcf tgbbimm jzdi rg n ,0
273200,8527719935.0,IssuesEvent,2018-11-02 20:30:21,WallarooLabs/wallaroo,https://api.github.com/repos/WallarooLabs/wallaroo,closed,Should parallelized stateless partition router use msg uid instead of step seq ids?,investigation priority: low,"Step seq ids have the advantage of allowing pure round robin sending to the stateless computations in parallel. The disadvantage is that those are really meant for resilience tracking.
Msg uids are meant to be available in all modes, but they aren't sequential, so round robin sending will only happen on average over the course of a run.",1.0,"Should parallelized stateless partition router use msg uid instead of step seq ids? - Step seq ids have the advantage of allowing pure round robin sending to the stateless computations in parallel. The disadvantage is that those are really meant for resilience tracking.
Msg uids are meant to be available in all modes, but they aren't sequential, so round robin sending will only happen on average over the course of a run.",0,should parallelized stateless partition router use msg uid instead of step seq ids step seq ids have the advantage of allowing pure round robin sending to the stateless computations in parallel the disadvantage is that those are really meant for resilience tracking msg uids are meant to be available in all modes but they aren t sequential so round robin sending will only happen on average over the course of a run ,0
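To make the routing trade-off described in the Wallaroo row above concrete, here is a minimal Rust sketch (names and numbers are illustrative only, not Wallaroo's API): dispatching on a sequential step seq id gives strict round robin across the parallel stateless computations, while dispatching on a non-sequential msg uid only balances them on average over a run.

```rust
// Illustrative sketch of the seq-id vs. msg-uid routing trade-off; not Wallaroo code.
fn route_by_seq_id(seq_id: u64, workers: u64) -> u64 {
    // Sequential ids visit workers in strict rotation: 0, 1, 2, 0, 1, 2, ...
    seq_id % workers
}

fn route_by_msg_uid(msg_uid: u64, workers: u64) -> u64 {
    // Non-sequential uids spread work uniformly only on average.
    msg_uid % workers
}

fn main() {
    let workers = 3;
    for seq_id in 0..6u64 {
        println!("seq {:>2} -> worker {}", seq_id, route_by_seq_id(seq_id, workers));
    }
    // Made-up, non-sequential uids: consecutive messages can land on the same worker.
    for uid in [0x9a3f_u64, 0x17c2, 0x9a40, 0x55aa] {
        println!("uid {:#06x} -> worker {}", uid, route_by_msg_uid(uid, workers));
    }
}
```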
112875,9604757401.0,IssuesEvent,2019-05-10 21:01:15,elastic/kibana,https://api.github.com/repos/elastic/kibana,closed,"Failing test: UI Functional Tests.test/functional/apps/visualize/_input_control_vis·js - visualize app input control visualization chained controls ""after all"" hook",failed-test,"A test failed on a tracked branch
```
{ NoSuchSessionError: This driver instance does not have a valid session ID (did you call WebDriver.quit()?) and may no longer be used.
at promise.finally (node_modules/selenium-webdriver/lib/webdriver.js:726:38)
at Object.thenFinally [as finally] (node_modules/selenium-webdriver/lib/promise.js:124:12)
at process._tickCallback (internal/process/next_tick.js:68:7) name: 'NoSuchSessionError', remoteStacktrace: '' }
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/JOB=kibana-ciGroup10,node=immutable/70/)
",1.0,"Failing test: UI Functional Tests.test/functional/apps/visualize/_input_control_vis·js - visualize app input control visualization chained controls ""after all"" hook - A test failed on a tracked branch
```
{ NoSuchSessionError: This driver instance does not have a valid session ID (did you call WebDriver.quit()?) and may no longer be used.
at promise.finally (node_modules/selenium-webdriver/lib/webdriver.js:726:38)
at Object.thenFinally [as finally] (node_modules/selenium-webdriver/lib/promise.js:124:12)
at process._tickCallback (internal/process/next_tick.js:68:7) name: 'NoSuchSessionError', remoteStacktrace: '' }
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/JOB=kibana-ciGroup10,node=immutable/70/)
",0,failing test ui functional tests test functional apps visualize input control vis·js visualize app input control visualization chained controls after all hook a test failed on a tracked branch nosuchsessionerror this driver instance does not have a valid session id did you call webdriver quit and may no longer be used at promise finally node modules selenium webdriver lib webdriver js at object thenfinally node modules selenium webdriver lib promise js at process tickcallback internal process next tick js name nosuchsessionerror remotestacktrace first failure ,0
1154,14698612786.0,IssuesEvent,2021-01-04 06:52:32,andrewchambers/bupstash,https://api.github.com/repos/andrewchambers/bupstash,closed,Building on ARM64 fails,portability,"Using following environment:
- rustc: 1.49.0 (e1884a8e3 2020-12-29)
- os: Linux rpi4 5.4.83-1-MANJARO-ARM #1 SMP PREEMPT aarch64 GNU/Linux
- commit: b3e2ee2e3d457cd71fb78854b4a2025879779d79 (current master)
`cargo build --release` fails with the following error
```shell
Compiling libc v0.2.80
Compiling cc v1.0.65
Compiling proc-macro2 v1.0.24
Compiling autocfg v1.0.1
Compiling unicode-xid v0.2.1
Compiling syn v1.0.51
Compiling memchr v2.3.4
Compiling typenum v1.12.0
Compiling version_check v0.9.2
Compiling pkg-config v0.3.19
Compiling lazy_static v1.4.0
Compiling serde_derive v1.0.117
Compiling serde v1.0.117
Compiling bitflags v1.2.1
Compiling cfg-if v1.0.0
Compiling ryu v1.0.5
Compiling unicode-width v0.1.8
Compiling regex-syntax v0.6.21
Compiling serde_json v1.0.59
Compiling subtle v2.3.0
Compiling nix v0.17.0
Compiling anyhow v1.0.34
Compiling linked-hash-map v0.5.3
Compiling cfg-if v0.1.10
Compiling fallible-streaming-iterator v0.1.9
Compiling codemap v0.1.3
Compiling arrayvec v0.5.2
Compiling arrayref v0.3.6
Compiling void v1.0.2
Compiling constant_time_eq v0.1.5
Compiling number_prefix v0.3.0
Compiling termcolor v1.1.2
Compiling itoa v0.4.6
Compiling fallible-iterator v0.2.0
Compiling smallvec v1.5.0
Compiling shlex v0.1.1
Compiling path-clean v0.1.0
Compiling rangemap v0.1.8
Compiling glob v0.3.0
Compiling once_cell v1.5.2
Compiling humantime v2.0.1
Compiling num-traits v0.2.14
Compiling crossbeam-utils v0.8.1
Compiling num-integer v0.1.44
Compiling generic-array v0.14.4
Compiling thread_local v1.0.1
Compiling bupstash v0.6.2 (/home/el/bupstash)
Compiling lz4-sys v1.9.2
Compiling libsqlite3-sys v0.18.0
Compiling blake3 v0.3.7
Compiling getopts v0.2.21
Compiling lru-cache v0.1.2
Compiling quote v1.0.7
Compiling aho-corasick v0.7.15
Compiling time v0.1.44
Compiling terminal_size v0.1.15
Compiling atty v0.2.14
Compiling xattr v0.2.2
Compiling filetime v0.2.13
Compiling fs2 v0.4.3
Compiling regex v1.4.2
Compiling crossbeam-channel v0.5.0
Compiling codemap-diagnostic v0.1.1
Compiling tar v0.4.30
Compiling digest v0.9.0
Compiling crypto-mac v0.8.0
Compiling console v0.13.0
Compiling thiserror-impl v1.0.22
Compiling lz4 v1.23.2
Compiling indicatif v0.15.0
Compiling thiserror v1.0.22
Compiling serde_bare v0.3.0
Compiling chrono v0.4.19
Compiling rusqlite v0.23.1
error[E0308]: mismatched types
--> src/base64.rs:15:13
|
15 | out_buf.as_mut_ptr() as *mut i8,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `u8`, found `i8`
|
= note: expected raw pointer `*mut u8`
found raw pointer `*mut i8`
error[E0308]: mismatched types
--> src/base64.rs:44:13
|
44 | data.as_ptr() as *const i8,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `u8`, found `i8`
|
= note: expected raw pointer `*const u8`
found raw pointer `*const i8`
error[E0308]: mismatched types
--> src/base64.rs:48:13
|
48 | std::ptr::null_mut::<*const i8>(),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `u8`, found `i8`
|
= note: expected raw pointer `*mut *const u8`
found raw pointer `*mut *const i8`
error: aborting due to 3 previous errors
For more information about this error, try `rustc --explain E0308`.
error: could not compile `bupstash`
To learn more, run the command again with --verbose.
```",True,"Building on ARM64 fails - Using following environment:
- rustc: 1.49.0 (e1884a8e3 2020-12-29)
- os: Linux rpi4 5.4.83-1-MANJARO-ARM #1 SMP PREEMPT aarch64 GNU/Linux
- commit: b3e2ee2e3d457cd71fb78854b4a2025879779d79 (current master)
`cargo build --release` fails with the following error
```shell
Compiling libc v0.2.80
Compiling cc v1.0.65
Compiling proc-macro2 v1.0.24
Compiling autocfg v1.0.1
Compiling unicode-xid v0.2.1
Compiling syn v1.0.51
Compiling memchr v2.3.4
Compiling typenum v1.12.0
Compiling version_check v0.9.2
Compiling pkg-config v0.3.19
Compiling lazy_static v1.4.0
Compiling serde_derive v1.0.117
Compiling serde v1.0.117
Compiling bitflags v1.2.1
Compiling cfg-if v1.0.0
Compiling ryu v1.0.5
Compiling unicode-width v0.1.8
Compiling regex-syntax v0.6.21
Compiling serde_json v1.0.59
Compiling subtle v2.3.0
Compiling nix v0.17.0
Compiling anyhow v1.0.34
Compiling linked-hash-map v0.5.3
Compiling cfg-if v0.1.10
Compiling fallible-streaming-iterator v0.1.9
Compiling codemap v0.1.3
Compiling arrayvec v0.5.2
Compiling arrayref v0.3.6
Compiling void v1.0.2
Compiling constant_time_eq v0.1.5
Compiling number_prefix v0.3.0
Compiling termcolor v1.1.2
Compiling itoa v0.4.6
Compiling fallible-iterator v0.2.0
Compiling smallvec v1.5.0
Compiling shlex v0.1.1
Compiling path-clean v0.1.0
Compiling rangemap v0.1.8
Compiling glob v0.3.0
Compiling once_cell v1.5.2
Compiling humantime v2.0.1
Compiling num-traits v0.2.14
Compiling crossbeam-utils v0.8.1
Compiling num-integer v0.1.44
Compiling generic-array v0.14.4
Compiling thread_local v1.0.1
Compiling bupstash v0.6.2 (/home/el/bupstash)
Compiling lz4-sys v1.9.2
Compiling libsqlite3-sys v0.18.0
Compiling blake3 v0.3.7
Compiling getopts v0.2.21
Compiling lru-cache v0.1.2
Compiling quote v1.0.7
Compiling aho-corasick v0.7.15
Compiling time v0.1.44
Compiling terminal_size v0.1.15
Compiling atty v0.2.14
Compiling xattr v0.2.2
Compiling filetime v0.2.13
Compiling fs2 v0.4.3
Compiling regex v1.4.2
Compiling crossbeam-channel v0.5.0
Compiling codemap-diagnostic v0.1.1
Compiling tar v0.4.30
Compiling digest v0.9.0
Compiling crypto-mac v0.8.0
Compiling console v0.13.0
Compiling thiserror-impl v1.0.22
Compiling lz4 v1.23.2
Compiling indicatif v0.15.0
Compiling thiserror v1.0.22
Compiling serde_bare v0.3.0
Compiling chrono v0.4.19
Compiling rusqlite v0.23.1
error[E0308]: mismatched types
--> src/base64.rs:15:13
|
15 | out_buf.as_mut_ptr() as *mut i8,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `u8`, found `i8`
|
= note: expected raw pointer `*mut u8`
found raw pointer `*mut i8`
error[E0308]: mismatched types
--> src/base64.rs:44:13
|
44 | data.as_ptr() as *const i8,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `u8`, found `i8`
|
= note: expected raw pointer `*const u8`
found raw pointer `*const i8`
error[E0308]: mismatched types
--> src/base64.rs:48:13
|
48 | std::ptr::null_mut::<*const i8>(),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `u8`, found `i8`
|
= note: expected raw pointer `*mut *const u8`
found raw pointer `*mut *const i8`
error: aborting due to 3 previous errors
For more information about this error, try `rustc --explain E0308`.
error: could not compile `bupstash`
To learn more, run the command again with --verbose.
```",1,building on fails using following environment rustc os linux manjaro arm smp preempt gnu linux commit current master cargo build release fails with the following error shell compiling libc compiling cc compiling proc compiling autocfg compiling unicode xid compiling syn compiling memchr compiling typenum compiling version check compiling pkg config compiling lazy static compiling serde derive compiling serde compiling bitflags compiling cfg if compiling ryu compiling unicode width compiling regex syntax compiling serde json compiling subtle compiling nix compiling anyhow compiling linked hash map compiling cfg if compiling fallible streaming iterator compiling codemap compiling arrayvec compiling arrayref compiling void compiling constant time eq compiling number prefix compiling termcolor compiling itoa compiling fallible iterator compiling smallvec compiling shlex compiling path clean compiling rangemap compiling glob compiling once cell compiling humantime compiling num traits compiling crossbeam utils compiling num integer compiling generic array compiling thread local compiling bupstash home el bupstash compiling sys compiling sys compiling compiling getopts compiling lru cache compiling quote compiling aho corasick compiling time compiling terminal size compiling atty compiling xattr compiling filetime compiling compiling regex compiling crossbeam channel compiling codemap diagnostic compiling tar compiling digest compiling crypto mac compiling console compiling thiserror impl compiling compiling indicatif compiling thiserror compiling serde bare compiling chrono compiling rusqlite error mismatched types src rs out buf as mut ptr as mut expected found note expected raw pointer mut found raw pointer mut error mismatched types src rs data as ptr as const expected found note expected raw pointer const found raw pointer const error mismatched types src rs std ptr null mut expected found note expected raw pointer mut const found raw pointer mut const error aborting due to previous errors for more information about this error try rustc explain error could not compile bupstash to learn more run the command again with verbose ,1
368032,10864992349.0,IssuesEvent,2019-11-14 18:00:55,siteorigin/so-widgets-bundle,https://api.github.com/repos/siteorigin/so-widgets-bundle,opened,SO Widget Block: Features defaults aren't present,bug priority-2,"In the SO Widget Block, the SiteOrigin Features defaults aren't present.
Icon container size
Icon size
Features per row
Empty, empty and zero are incorrect defaults for those fields.",1.0,"SO Widget Block: Features defaults aren't present - In the SO Widget Block, the SiteOrigin Features defaults aren't present.
Icon container size
Icon size
Features per row
Empty, empty and zero are incorrect defaults for those fields.",0,so widget block features defaults aren t present in the so widget block the siteorigin features defaults aren t present img width alt edit page ‹ siteorigin — wordpress src icon container size icon size features per row empty empty and zero are incorrect defaults for those fields ,0
168843,13104276612.0,IssuesEvent,2020-08-04 09:59:14,web-platform-tests/wpt,https://api.github.com/repos/web-platform-tests/wpt,opened,testharness.js reuses an existing #log,cookies testharness.js,"see e.g. http://wpt.live/cookies/http-state/domain-tests.html (cookies/http-state/domain-tests.html)
looking at the testharness.js source this has always been the case, though? this does make these tests rather hard to read however…",1.0,"testharness.js reuses an existing #log - see e.g. http://wpt.live/cookies/http-state/domain-tests.html (cookies/http-state/domain-tests.html)
looking at the testharness.js source this has always been the case, though? this does make these tests rather hard to read however…",0,testharness js reuses an existing log see e g cookies http state domain tests html looking at the testharness js source this has always been the case though this does make these tests rather hard to read however…,0
28,2667158310.0,IssuesEvent,2015-03-22 09:46:12,funcoeszz/funcoeszz,https://api.github.com/repos/funcoeszz/funcoeszz,closed,zzdatafmt: ANO identifier fails on BSD,bug portabilidade,"
$ ./run zzdatafmt.sh
..................................................................................................................
--------------------------------------------------------------------------------------------------------------------------------------------------------------
[FAILED #114, line 144] zzdatafmt -f ANO 01/01/1000
@@ -1 +1,3 @@
+usage: date [-jnu] [-d dst] [-r seconds] [-t west] [-v[+|-]val[ymwdHMS]] ...
+ [-f fmt date | [[[mm]dd]HH]MM[[cc]yy][.ss]] [+format]
mil
--------------------------------------------------------------------------------------------------------------------------------------------------------------
.
--------------------------------------------------------------------------------------------------------------------------------------------------------------
[FAILED #115, line 145] zzdatafmt -f ANO 01/01/1900
@@ -1 +1,3 @@
+usage: date [-jnu] [-d dst] [-r seconds] [-t west] [-v[+|-]val[ymwdHMS]] ...
+ [-f fmt date | [[[mm]dd]HH]MM[[cc]yy][.ss]] [+format]
mil e novecentos
--------------------------------------------------------------------------------------------------------------------------------------------------------------
.................................................................................................................................................................................................................................................................................................................................................
FAIL: 2 of 452 tests failed
```
Curiously, the error happens only with a few specific years: 1000 and 1900. The others are OK, for example: 1990, 1999, 2000, 2001, 2010.",True,"zzdatafmt: ANO identifier fails on BSD - ```
$ ./run zzdatafmt.sh
..................................................................................................................
--------------------------------------------------------------------------------------------------------------------------------------------------------------
[FAILED #114, line 144] zzdatafmt -f ANO 01/01/1000
@@ -1 +1,3 @@
+usage: date [-jnu] [-d dst] [-r seconds] [-t west] [-v[+|-]val[ymwdHMS]] ...
+ [-f fmt date | [[[mm]dd]HH]MM[[cc]yy][.ss]] [+format]
mil
--------------------------------------------------------------------------------------------------------------------------------------------------------------
.
--------------------------------------------------------------------------------------------------------------------------------------------------------------
[FAILED #115, line 145] zzdatafmt -f ANO 01/01/1900
@@ -1 +1,3 @@
+usage: date [-jnu] [-d dst] [-r seconds] [-t west] [-v[+|-]val[ymwdHMS]] ...
+ [-f fmt date | [[[mm]dd]HH]MM[[cc]yy][.ss]] [+format]
mil e novecentos
--------------------------------------------------------------------------------------------------------------------------------------------------------------
.................................................................................................................................................................................................................................................................................................................................................
FAIL: 2 of 452 tests failed
```
Curiously, the error happens only with a few specific years: 1000 and 1900. The others are OK, for example: 1990, 1999, 2000, 2001, 2010.",1,zzdatafmt ano identifier fails on bsd run zzdatafmt sh zzdatafmt f ano usage date val dd hh mm yy mil zzdatafmt f ano usage date val dd hh mm yy mil e novecentos fail of tests failed curiously the error happens only with a few specific years and the others are ok for example ,1
252945,21640799313.0,IssuesEvent,2022-05-05 18:33:57,damccorm/test-migration-target,https://api.github.com/repos/damccorm/test-migration-target,opened,Document Jenkins ghprb commands,P3 testing task,"Summarize current ghprb (github pull request builder plugin) commands for people to easily find and use instead of to check each groovy file.
commands includes:
""retest this please"",
command to run specific Jenkins build (defined under .test\-infra/jenkins/job_beam_*.groovy).
Imported from Jira [BEAM-3068](https://issues.apache.org/jira/browse/BEAM-3068). Original Jira may contain additional context.
Reported by: markflyhigh.",1.0,"Document Jenkins ghprb commands - Summarize current ghprb (github pull request builder plugin) commands for people to easily find and use instead of to check each groovy file.
commands includes:
""retest this please"",
command to run specific Jenkins build (defined under .test\-infra/jenkins/job_beam_*.groovy).
Imported from Jira [BEAM-3068](https://issues.apache.org/jira/browse/BEAM-3068). Original Jira may contain additional context.
Reported by: markflyhigh.",0,document jenkins ghprb commands summarize current ghprb github pull request builder plugin commands for people to easily find and use instead of to check each groovy file commands includes retest this please command to run specific jenkins build defined under test infra jenkins job beam groovy imported from jira original jira may contain additional context reported by markflyhigh ,0
753768,26360830604.0,IssuesEvent,2023-01-11 13:16:13,Dessia-tech/dessia_common,https://api.github.com/repos/Dessia-tech/dessia_common,closed,Problem with references in case of custom eq,Priority: High Status: To be discussed,"Redefining eq of an object may create side effects in serialization.
Subobjects may not be equal strictly but can be pointed outside of the object, and won't be in the memo
solution: when an object is matched in the memo, keep exploring its subattributes in case they are used elsewhere",1.0,"Problem with references in case of custom eq - Redefining eq of an object may create side effects in serialization.
Subobjects may not be equal strictly but can be pointed outside of the object, and won't be in the memo
solution: when an object is matched in the memo, keep exploring its subattributes in case they are used elsewhere",0,problem with references in case of custom eq redefining eq of an object may create side effects in serialization subobjects may not be equal strictly but can be pointed outside of the object and won t be in the memo solution when an object is matched in the memo keep exploring its subattributes in case they are used elsewhere,0
698,9422081411.0,IssuesEvent,2019-04-11 08:33:01,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,Add preprocessor warnings for formats that are skipped,Fixed - pending verify portability,"Otherwise, users and developers will not notice that (and why) they do not get all the formats.
./configure of course mentions when you don't get OpenCL and CUDA formats in the ""feature summary"" output.
It also mentions pkzip format and generic crypt(3) format.
But it doesn't mention that you don't get OpenVMS format on big endian systems, or if some formats are suppressed due to missing CPU features (SSE4.1, etc).
In these cases, we should add a preprocessor warning, either for DEBUG builds or for all builds.
See comments on commit https://github.com/magnumripper/JohnTheRipper/commit/b1da77d8a235a0517782700f1cbe536e4203b0ed.
At least these formats can be skipped depending on endianness, CPU features, etc:
- [x] Stribog-256
- [x] Stribog-512
- [x] OpenVMS
So we need to add preprocessor warnings in case they are skipped.
",True,"Add preprocessor warnings for formats that are skipped - Otherwise, users and developers will not notice that (and why) they do not get all the formats.
./configure of course mentions when you don't get OpenCL and CUDA formats in the ""feature summary"" output.
It also mentions pkzip format and generic crypt(3) format.
But it doesn't mention that you don't get OpenVMS format on big endian systems, or if some formats are suppressed due to missing CPU features (SSE4.1, etc).
In these cases, we should add a preprocessor warning, either for DEBUG builds or for all builds.
See comments on commit https://github.com/magnumripper/JohnTheRipper/commit/b1da77d8a235a0517782700f1cbe536e4203b0ed.
At least these formats can be skipped depending on endianness, CPU features, etc:
- [x] Stribog-256
- [x] Stribog-512
- [x] OpenVMS
So we need to add preprocessor warnings in case they are skipped.
",1,add preprocessor warnings for formats that are skipped otherwise users and developers will not notice that and why they do not get all the formats configure of course mentions when you don t get opencl and cuda formats in the feature summary output it also mentions pkzip format and generic crypt format but it doesn t mention that you don t get openvms format on big endian systems or if some formats are suppressed due to missing cpu features etc in these cases we should add a preprocessor warning either for debug builds or for all builds see comments on commit at least these formats can be skipped depending on endianness cpu features etc stribog stribog openvms so we need to add preprocessor warnings in case they are skipped ,1
1280,17109499826.0,IssuesEvent,2021-07-10 02:22:44,verilator/verilator,https://api.github.com/repos/verilator/verilator,closed,when I make verilator@4.210 and it failed in spack,area: portability status: asked reporter,"When I make verilator@4.210 and it failed in spack:
```console
[root@centos8 ~]# spack install -y -v --keep-stage --dont-restage --no-checksum --fail-fast verilator@4.210
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/libiconv-1.16-5gvzkthgdyzwipoooaqkilcjetijwrqh
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/pkgconf-1.7.3-qnwffdxtfpwlagzm34msmtobii7iksqf
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/xz-5.2.5-5npvhv6x4hagkymz4dw2suvda5r6hpjk
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/zlib-1.2.11-ysclwr6pk6lufuzqp3tf5la4lreyq7qt
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/berkeley-db-18.1.40-vfyntuwixkzvr2rt2zjeijiqmtuu6brd
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/libsigsegv-2.12-dkxnizjmb3vgeurskkjiysmjcldaaw7x
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/diffutils-3.7-z6t4zfsla623dzpnpwebutiuoenvlt5d
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/tar-1.32-q6qjhc42v4z7prpkjfsowtbjdceh7dbb
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/ncurses-6.2-l6lhmwy4kpgkzfmgiiq2bikklnvulcyf
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/libxml2-2.9.10-cn4eee7gqv7drycs6nybr5pxxlc5uivl
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/m4-1.4.18-xjpzxna5lzr3g4gku6mjy5vz2jiyjvpi
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/bzip2-1.0.8-urctfqbstx2mii7jxupeqeypgbrul37q
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/readline-8.0-6otekrq5w5yaqh7xbi4to7hkewmnddl3
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/libtool-2.4.6-rmp7rhxalhqckvkbsxiicnq5gedv5ygy
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/gettext-0.21-pghfs6ximmdo3cyyhouf5jtwowtyngxf
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/gdbm-1.18.1-sbfql4dkw5n2cri4z34km7dmrjres4fg
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/perl-5.32.1-rs3gej7g7fp5pibnhenekowwbdfphevg
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/help2man-1.47.16-x66avrn7sgvlx7anle7bf4lekqqf232z
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/autoconf-2.69-rl7fity4bcttga7bffj7wisz673gg232
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/texinfo-6.5-jcwiw2ljyr4uauolx7kxbhrshdprlvpj
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/bison-3.7.6-krrkhqqfsuqtfhcnusuenmhz3ukilmqf
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/automake-1.16.3-ycbmiqis6f652tvuz3vkysewhl2m6jky
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/findutils-4.6.0-q3z2d45sn25fcrpvqbfo6nbqy4orlioo
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/flex-2.6.4-qsqamz4edx44n4uqjl57m7trdwqm3zyd
==> Installing verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt
==> No binary for verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt found: installing from source
==> Using cached archive: /home/spack/spack/var/spack/cache/_source-cache/archive/2a/2a821f25e5766884e7c22076790810a386725df31ee9eac58862977b347e2018.tgz
==> No patches needed for verilator
==> verilator: Executing phase: 'autoreconf'
==> verilator: Executing phase: 'configure'
==> [2021-07-09-11:03:03.627507] '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/configure' '--prefix=/home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt'
configuring for Verilator 4.210 2021-07-07
checking whether to perform partial static linking of Verilator binary... yes
checking whether to use tcmalloc... check
checking whether to use -m32... no
checking whether to build for coverage collection... no
checking whether to use hardcoded paths... yes
checking whether to show and stop on compilation warnings... no
checking whether to run long tests... no
checking for gcc... /home/spack/spack/lib/spack/env/gcc/gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether /home/spack/spack/lib/spack/env/gcc/gcc accepts -g... yes
checking for /home/spack/spack/lib/spack/env/gcc/gcc option to accept ISO C89... none needed
checking whether we are using the GNU C++ compiler... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -g... yes
checking for a BSD-compatible install... /usr/bin/install -c
compiler is /home/spack/spack/lib/spack/env/gcc/g++ --version = g++ (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1)
checking that C++ compiler can compile simple program... yes
checking for ar... ar
checking for perl... /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/perl-5.32.1-rs3gej7g7fp5pibnhenekowwbdfphevg/bin/perl
checking for python3... /usr/bin/python3
checking for flex... /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/flex-2.6.4-qsqamz4edx44n4uqjl57m7trdwqm3zyd/bin/flex
/home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/flex-2.6.4-qsqamz4edx44n4uqjl57m7trdwqm3zyd/bin/flex --version = flex 2.6.4
checking for bison... /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/bison-3.7.6-krrkhqqfsuqtfhcnusuenmhz3ukilmqf/bin/bison
/home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/bison-3.7.6-krrkhqqfsuqtfhcnusuenmhz3ukilmqf/bin/bison --version = bison (GNU Bison) 3.7.6
checking for ccache... no
checking how to run the C++ preprocessor... /home/spack/spack/lib/spack/env/gcc/g++ -E
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for size_t... yes
checking for size_t... (cached) yes
checking for inline... inline
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -pg... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -std=gnu++14... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -std=c++03... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wextra... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wfloat-conversion... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wlogical-op... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wthread-safety... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Qunused-arguments... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -faligned-new... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-unused-parameter... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-shadow... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-char-subscripts... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-null-conversion... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-parentheses-equality... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-unused... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Og... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -ggdb... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -gz... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -gz... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -faligned-new... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -fbracket-depth=4096... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -fcf-protection=none... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -mno-cet... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Qunused-arguments... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-bool-operation... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-tautological-bitwise-compare... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-parentheses-equality... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-sign-compare... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-uninitialized... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-unused-but-set-variable... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-unused-parameter... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-unused-variable... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-shadow... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -mt... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -pthread... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -lpthread... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -latomic... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -static-libgcc... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -static-libstdc++... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -Xlinker -gc-sections... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -lpthread... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -lbcrypt... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -lpsapi... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -l:libtcmalloc_minimal.a... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ supports C++11... yes
checking for struct stat.st_mtim.tv_nsec... yes
checking whether SystemC is found (in system path)... no
configure: creating ./config.status
config.status: creating Makefile
config.status: creating src/Makefile
config.status: creating src/Makefile_obj
config.status: creating include/verilated.mk
config.status: creating include/verilated_config.h
config.status: creating verilator.pc
config.status: creating verilator-config.cmake
config.status: creating verilator-config-version.cmake
config.status: creating src/config_build.h
config.status: src/config_build.h is unchanged
Now type 'make' (or sometimes 'gmake') to build Verilator.
==> verilator: Executing phase: 'build'
==> [2021-07-09-11:03:12.017110] 'make' '-j8'
------------------------------------------------------------
making verilator in src
make -C src
make[1]: Entering directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src'
make -C obj_dbg -j 1 TGT=../../bin/verilator_bin_dbg VL_DEBUG=1 -f ../Makefile_obj serial
make -C obj_dbg TGT=../../bin/verilator_coverage_bin_dbg VL_DEBUG=1 VL_VLCOV=1 -f ../Makefile_obj serial_vlcov
make -C obj_opt -j 1 TGT=../../bin/verilator_bin -f ../Makefile_obj serial
make[2]: Entering directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src'
make[2]: warning: -jN forced in submake: disabling jobserver mode.
make[2]: Entering directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src'
make[2]: warning: -jN forced in submake: disabling jobserver mode.
make[2]: Nothing to be done for 'serial'.
make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
make -C obj_dbg TGT=../../bin/verilator_bin_dbg VL_DEBUG=1 -f ../Makefile_obj
make[2]: Entering directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
make[2]: Nothing to be done for 'serial_vlcov'.
make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
make -C obj_dbg TGT=../../bin/verilator_coverage_bin_dbg VL_DEBUG=1 VL_VLCOV=1 -f ../Makefile_obj
make[2]: Nothing to be done for 'serial'.
make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_opt'
make -C obj_opt TGT=../../bin/verilator_bin -f ../Makefile_obj
make[2]: Entering directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
make[2]: Entering directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
make[2]: Entering directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_opt'
Compile flags: /home/spack/spack/lib/spack/env/gcc/g++ -Og -ggdb -gz -DVL_DEBUG -D_GLIBCXX_DEBUG -MMD -I. -I.. -I.. -I../../include -I../../include -MP -faligned-new -Wno-unused-parameter -Wno-shadow -DDEFENV_SYSTEMC="""" -DDEFENV_SYSTEMC_ARCH="""" -DDEFENV_SYSTEMC_INCLUDE="""" -DDEFENV_SYSTEMC_LIBDIR="""" -DDEFENV_VERILATOR_ROOT=""/home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/share/verilator""
Compile flags: /home/spack/spack/lib/spack/env/gcc/g++ -Og -ggdb -gz -DVL_DEBUG -D_GLIBCXX_DEBUG -MMD -I. -I.. -I.. -I../../include -I../../include -MP -faligned-new -Wno-unused-parameter -Wno-shadow -DDEFENV_SYSTEMC="""" -DDEFENV_SYSTEMC_ARCH="""" -DDEFENV_SYSTEMC_INCLUDE="""" -DDEFENV_SYSTEMC_LIBDIR="""" -DDEFENV_VERILATOR_ROOT=""/home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/share/verilator""
Compile flags: /home/spack/spack/lib/spack/env/gcc/g++ -O2 -MMD -I. -I.. -I.. -I../../include -I../../include -MP -faligned-new -Wno-unused-parameter -Wno-shadow -DDEFENV_SYSTEMC="""" -DDEFENV_SYSTEMC_ARCH="""" -DDEFENV_SYSTEMC_INCLUDE="""" -DDEFENV_SYSTEMC_LIBDIR="""" -DDEFENV_VERILATOR_ROOT=""/home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/share/verilator""
Linking ../../bin/verilator_coverage_bin_dbg...
/home/spack/spack/lib/spack/env/gcc/g++ -gz -static-libgcc -Xlinker -gc-sections -o ../../bin/verilator_coverage_bin_dbg VlcMain.o -lpthread -lm
make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_opt'
Linking ../../bin/verilator_bin_dbg...
/home/spack/spack/lib/spack/env/gcc/g++ -gz -static-libgcc -Xlinker -gc-sections -o ../../bin/verilator_bin_dbg Verilator.o V3Active.o V3ActiveTop.o V3Assert.o V3AssertPre.o V3Ast.o V3AstNodes.o V3Begin.o V3Branch.o V3Broken.o V3CCtors.o V3CUse.o V3Case.o V3Cast.o V3Cdc.o V3Changed.o V3Class.o V3Clean.o V3Clock.o V3Combine.o V3Config.o V3Const__gen.o V3Coverage.o V3CoverageJoin.o V3Dead.o V3Delayed.o V3Depth.o V3DepthBlock.o V3Descope.o V3DupFinder.o V3EmitC.o V3EmitCBase.o V3EmitCConstPool.o V3EmitCFunc.o V3EmitCInlines.o V3EmitCMain.o V3EmitCMake.o V3EmitCModel.o V3EmitCSyms.o V3EmitMk.o V3EmitV.o V3EmitXml.o V3Error.o V3Expand.o V3File.o V3FileLine.o V3Gate.o V3GenClk.o V3Global.o V3Graph.o V3GraphAlg.o V3GraphAcyc.o V3GraphDfa.o V3GraphPathChecker.o V3GraphTest.o V3Hash.o V3Hasher.o V3HierBlock.o V3Inline.o V3Inst.o V3InstrCount.o V3Life.o V3LifePost.o V3LinkCells.o V3LinkDot.o V3LinkJump.o V3LinkInc.o V3LinkLValue.o V3LinkLevel.o V3LinkParse.o V3LinkResolve.o V3Localize.o V3MergeCond.o V3Name.o V3Number.o V3OptionParser.o V3Options.o V3Order.o V3Os.o V3Param.o V3Partition.o V3PreShell.o V3Premit.o V3ProtectLib.o V3Randomize.o V3Reloop.o V3Scope.o V3Scoreboard.o V3Slice.o V3Split.o V3SplitAs.o V3SplitVar.o V3Stats.o V3StatsReport.o V3String.o V3Subst.o V3Table.o V3Task.o V3Trace.o V3TraceDecl.o V3Tristate.o V3TSP.o V3Undriven.o V3Unknown.o V3Unroll.o V3Waiver.o V3Width.o V3WidthSel.o V3ParseImp.o V3ParseGrammar.o V3ParseLex.o V3PreProc.o -lpthread -lm
/usr/bin/ld: VlcMain.o: unable to initialize decompress status for section .debug_info
/usr/bin/ld: VlcMain.o: unable to initialize decompress status for section .debug_info
/usr/bin/ld: VlcMain.o: unable to initialize decompress status for section .debug_info
/usr/bin/ld: VlcMain.o: unable to initialize decompress status for section .debug_info
VlcMain.o: file not recognized: File format not recognized
collect2: error: ld returned 1 exit status
make[2]: *** [../Makefile_obj:287: ../../bin/verilator_coverage_bin_dbg] Error 1
make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
make[1]: *** [Makefile:70: ../bin/verilator_coverage_bin_dbg] Error 2
make[1]: *** Waiting for unfinished jobs....
/usr/bin/ld: Verilator.o: unable to initialize decompress status for section .debug_info
/usr/bin/ld: Verilator.o: unable to initialize decompress status for section .debug_info
/usr/bin/ld: Verilator.o: unable to initialize decompress status for section .debug_info
/usr/bin/ld: Verilator.o: unable to initialize decompress status for section .debug_info
Verilator.o: file not recognized: File format not recognized
collect2: error: ld returned 1 exit status
make[2]: *** [../Makefile_obj:287: ../../bin/verilator_bin_dbg] Error 1
make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
make[1]: *** [Makefile:66: ../bin/verilator_bin_dbg] Error 2
make[1]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src'
make: *** [Makefile:233: verilator_exe] Error 2
==> Error: ProcessError: Command exited with status 2:
'make' '-j8'
4 errors found in build log:
144 /home/spack/spack/lib/spack/env/gcc/g++ -gz -static-libgcc -Xlinker -gc-sections -o ../../bin/verilator_bin_dbg Verilator.o V3Active.o V3ActiveTop.o V3Assert.o V3AssertPre.o V3Ast.o V3AstNod
es.o V3Begin.o V3Branch.o V3Broken.o V3CCtors.o V3CUse.o V3Case.o V3Cast.o V3Cdc.o V3Changed.o V3Class.o V3Clean.o V3Clock.o V3Combine.o V3Config.o V3Const__gen.o V3Coverage.o V3CoverageJoin.
o V3Dead.o V3Delayed.o V3Depth.o V3DepthBlock.o V3Descope.o V3DupFinder.o V3EmitC.o V3EmitCBase.o V3EmitCConstPool.o V3EmitCFunc.o V3EmitCInlines.o V3EmitCMain.o V3EmitCMake.o V3EmitCModel.o
V3EmitCSyms.o V3EmitMk.o V3EmitV.o V3EmitXml.o V3Error.o V3Expand.o V3File.o V3FileLine.o V3Gate.o V3GenClk.o V3Global.o V3Graph.o V3GraphAlg.o V3GraphAcyc.o V3GraphDfa.o V3GraphPathChecker.o
V3GraphTest.o V3Hash.o V3Hasher.o V3HierBlock.o V3Inline.o V3Inst.o V3InstrCount.o V3Life.o V3LifePost.o V3LinkCells.o V3LinkDot.o V3LinkJump.o V3LinkInc.o V3LinkLValue.o V3LinkLevel.o V3Lin
kParse.o V3LinkResolve.o V3Localize.o V3MergeCond.o V3Name.o V3Number.o V3OptionParser.o V3Options.o V3Order.o V3Os.o V3Param.o V3Partition.o V3PreShell.o V3Premit.o V3ProtectLib.o V3Randomiz
e.o V3Reloop.o V3Scope.o V3Scoreboard.o V3Slice.o V3Split.o V3SplitAs.o V3SplitVar.o V3Stats.o V3StatsReport.o V3String.o V3Subst.o V3Table.o V3Task.o V3Trace.o V3TraceDecl.o V3Tristate.o V3T
SP.o V3Undriven.o V3Unknown.o V3Unroll.o V3Waiver.o V3Width.o V3WidthSel.o V3ParseImp.o V3ParseGrammar.o V3ParseLex.o V3PreProc.o -lpthread -lm
145 /usr/bin/ld: VlcMain.o: unable to initialize decompress status for section .debug_info
146 /usr/bin/ld: VlcMain.o: unable to initialize decompress status for section .debug_info
147 /usr/bin/ld: VlcMain.o: unable to initialize decompress status for section .debug_info
148 /usr/bin/ld: VlcMain.o: unable to initialize decompress status for section .debug_info
149 VlcMain.o: file not recognized: File format not recognized
>> 150 collect2: error: ld returned 1 exit status
>> 151 make[2]: *** [../Makefile_obj:287: ../../bin/verilator_coverage_bin_dbg] Error 1
152 make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
153 make[1]: *** [Makefile:70: ../bin/verilator_coverage_bin_dbg] Error 2
154 make[1]: *** Waiting for unfinished jobs....
155 /usr/bin/ld: Verilator.o: unable to initialize decompress status for section .debug_info
156 /usr/bin/ld: Verilator.o: unable to initialize decompress status for section .debug_info
157 /usr/bin/ld: Verilator.o: unable to initialize decompress status for section .debug_info
158 /usr/bin/ld: Verilator.o: unable to initialize decompress status for section .debug_info
159 Verilator.o: file not recognized: File format not recognized
>> 160 collect2: error: ld returned 1 exit status
>> 161 make[2]: *** [../Makefile_obj:287: ../../bin/verilator_bin_dbg] Error 1
162 make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
163 make[1]: *** [Makefile:66: ../bin/verilator_bin_dbg] Error 2
164 make[1]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src'
165 make: *** [Makefile:233: verilator_exe] Error 2
See build log for details:
/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-build-out.txt
==> Error: Terminating after first install failure: ProcessError: Command exited with status 2:
'make' '-j8'
```
Can you help me analyze it?
### Information on your system
[root@centos8 ~]# spack debug report
* **Spack:** 0.16.1-1624-a0b5dcca3c
* **Python:** 3.6.8
* **Platform:** linux-centos8-aarch64
* **Concretizer:** original
",True,"when I make verilator@4.210 and it failed in spack - When I make verilator@4.210 and it failed in spack:
```console
[root@centos8 ~]# spack install -y -v --keep-stage --dont-restage --no-checksum --fail-fast verilator@4.210
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/libiconv-1.16-5gvzkthgdyzwipoooaqkilcjetijwrqh
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/pkgconf-1.7.3-qnwffdxtfpwlagzm34msmtobii7iksqf
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/xz-5.2.5-5npvhv6x4hagkymz4dw2suvda5r6hpjk
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/zlib-1.2.11-ysclwr6pk6lufuzqp3tf5la4lreyq7qt
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/berkeley-db-18.1.40-vfyntuwixkzvr2rt2zjeijiqmtuu6brd
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/libsigsegv-2.12-dkxnizjmb3vgeurskkjiysmjcldaaw7x
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/diffutils-3.7-z6t4zfsla623dzpnpwebutiuoenvlt5d
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/tar-1.32-q6qjhc42v4z7prpkjfsowtbjdceh7dbb
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/ncurses-6.2-l6lhmwy4kpgkzfmgiiq2bikklnvulcyf
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/libxml2-2.9.10-cn4eee7gqv7drycs6nybr5pxxlc5uivl
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/m4-1.4.18-xjpzxna5lzr3g4gku6mjy5vz2jiyjvpi
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/bzip2-1.0.8-urctfqbstx2mii7jxupeqeypgbrul37q
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/readline-8.0-6otekrq5w5yaqh7xbi4to7hkewmnddl3
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/libtool-2.4.6-rmp7rhxalhqckvkbsxiicnq5gedv5ygy
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/gettext-0.21-pghfs6ximmdo3cyyhouf5jtwowtyngxf
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/gdbm-1.18.1-sbfql4dkw5n2cri4z34km7dmrjres4fg
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/perl-5.32.1-rs3gej7g7fp5pibnhenekowwbdfphevg
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/help2man-1.47.16-x66avrn7sgvlx7anle7bf4lekqqf232z
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/autoconf-2.69-rl7fity4bcttga7bffj7wisz673gg232
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/texinfo-6.5-jcwiw2ljyr4uauolx7kxbhrshdprlvpj
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/bison-3.7.6-krrkhqqfsuqtfhcnusuenmhz3ukilmqf
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/automake-1.16.3-ycbmiqis6f652tvuz3vkysewhl2m6jky
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/findutils-4.6.0-q3z2d45sn25fcrpvqbfo6nbqy4orlioo
[+] /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/flex-2.6.4-qsqamz4edx44n4uqjl57m7trdwqm3zyd
==> Installing verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt
==> No binary for verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt found: installing from source
==> Using cached archive: /home/spack/spack/var/spack/cache/_source-cache/archive/2a/2a821f25e5766884e7c22076790810a386725df31ee9eac58862977b347e2018.tgz
==> No patches needed for verilator
==> verilator: Executing phase: 'autoreconf'
==> verilator: Executing phase: 'configure'
==> [2021-07-09-11:03:03.627507] '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/configure' '--prefix=/home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt'
configuring for Verilator 4.210 2021-07-07
checking whether to perform partial static linking of Verilator binary... yes
checking whether to use tcmalloc... check
checking whether to use -m32... no
checking whether to build for coverage collection... no
checking whether to use hardcoded paths... yes
checking whether to show and stop on compilation warnings... no
checking whether to run long tests... no
checking for gcc... /home/spack/spack/lib/spack/env/gcc/gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether /home/spack/spack/lib/spack/env/gcc/gcc accepts -g... yes
checking for /home/spack/spack/lib/spack/env/gcc/gcc option to accept ISO C89... none needed
checking whether we are using the GNU C++ compiler... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -g... yes
checking for a BSD-compatible install... /usr/bin/install -c
compiler is /home/spack/spack/lib/spack/env/gcc/g++ --version = g++ (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1)
checking that C++ compiler can compile simple program... yes
checking for ar... ar
checking for perl... /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/perl-5.32.1-rs3gej7g7fp5pibnhenekowwbdfphevg/bin/perl
checking for python3... /usr/bin/python3
checking for flex... /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/flex-2.6.4-qsqamz4edx44n4uqjl57m7trdwqm3zyd/bin/flex
/home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/flex-2.6.4-qsqamz4edx44n4uqjl57m7trdwqm3zyd/bin/flex --version = flex 2.6.4
checking for bison... /home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/bison-3.7.6-krrkhqqfsuqtfhcnusuenmhz3ukilmqf/bin/bison
/home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/bison-3.7.6-krrkhqqfsuqtfhcnusuenmhz3ukilmqf/bin/bison --version = bison (GNU Bison) 3.7.6
checking for ccache... no
checking how to run the C++ preprocessor... /home/spack/spack/lib/spack/env/gcc/g++ -E
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for size_t... yes
checking for size_t... (cached) yes
checking for inline... inline
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -pg... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -std=gnu++14... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -std=c++03... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wextra... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wfloat-conversion... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wlogical-op... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wthread-safety... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Qunused-arguments... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -faligned-new... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-unused-parameter... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-shadow... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-char-subscripts... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-null-conversion... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-parentheses-equality... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-unused... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Og... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -ggdb... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -gz... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -gz... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -faligned-new... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -fbracket-depth=4096... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -fcf-protection=none... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -mno-cet... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Qunused-arguments... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-bool-operation... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-tautological-bitwise-compare... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-parentheses-equality... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-sign-compare... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-uninitialized... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-unused-but-set-variable... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-unused-parameter... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-unused-variable... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ accepts -Wno-shadow... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -mt... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -pthread... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -lpthread... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -latomic... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -static-libgcc... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -static-libstdc++... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -Xlinker -gc-sections... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -lpthread... yes
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -lbcrypt... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -lpsapi... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ linker accepts -l:libtcmalloc_minimal.a... no
checking whether /home/spack/spack/lib/spack/env/gcc/g++ supports C++11... yes
checking for struct stat.st_mtim.tv_nsec... yes
checking whether SystemC is found (in system path)... no
configure: creating ./config.status
config.status: creating Makefile
config.status: creating src/Makefile
config.status: creating src/Makefile_obj
config.status: creating include/verilated.mk
config.status: creating include/verilated_config.h
config.status: creating verilator.pc
config.status: creating verilator-config.cmake
config.status: creating verilator-config-version.cmake
config.status: creating src/config_build.h
config.status: src/config_build.h is unchanged
Now type 'make' (or sometimes 'gmake') to build Verilator.
==> verilator: Executing phase: 'build'
==> [2021-07-09-11:03:12.017110] 'make' '-j8'
------------------------------------------------------------
making verilator in src
make -C src
make[1]: Entering directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src'
make -C obj_dbg -j 1 TGT=../../bin/verilator_bin_dbg VL_DEBUG=1 -f ../Makefile_obj serial
make -C obj_dbg TGT=../../bin/verilator_coverage_bin_dbg VL_DEBUG=1 VL_VLCOV=1 -f ../Makefile_obj serial_vlcov
make -C obj_opt -j 1 TGT=../../bin/verilator_bin -f ../Makefile_obj serial
make[2]: Entering directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src'
make[2]: warning: -jN forced in submake: disabling jobserver mode.
make[2]: Entering directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src'
make[2]: warning: -jN forced in submake: disabling jobserver mode.
make[2]: Nothing to be done for 'serial'.
make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
make -C obj_dbg TGT=../../bin/verilator_bin_dbg VL_DEBUG=1 -f ../Makefile_obj
make[2]: Entering directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
make[2]: Nothing to be done for 'serial_vlcov'.
make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
make -C obj_dbg TGT=../../bin/verilator_coverage_bin_dbg VL_DEBUG=1 VL_VLCOV=1 -f ../Makefile_obj
make[2]: Nothing to be done for 'serial'.
make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_opt'
make -C obj_opt TGT=../../bin/verilator_bin -f ../Makefile_obj
make[2]: Entering directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
make[2]: Entering directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
make[2]: Entering directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_opt'
Compile flags: /home/spack/spack/lib/spack/env/gcc/g++ -Og -ggdb -gz -DVL_DEBUG -D_GLIBCXX_DEBUG -MMD -I. -I.. -I.. -I../../include -I../../include -MP -faligned-new -Wno-unused-parameter -Wno-shadow -DDEFENV_SYSTEMC="""" -DDEFENV_SYSTEMC_ARCH="""" -DDEFENV_SYSTEMC_INCLUDE="""" -DDEFENV_SYSTEMC_LIBDIR="""" -DDEFENV_VERILATOR_ROOT=""/home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/share/verilator""
Compile flags: /home/spack/spack/lib/spack/env/gcc/g++ -Og -ggdb -gz -DVL_DEBUG -D_GLIBCXX_DEBUG -MMD -I. -I.. -I.. -I../../include -I../../include -MP -faligned-new -Wno-unused-parameter -Wno-shadow -DDEFENV_SYSTEMC="""" -DDEFENV_SYSTEMC_ARCH="""" -DDEFENV_SYSTEMC_INCLUDE="""" -DDEFENV_SYSTEMC_LIBDIR="""" -DDEFENV_VERILATOR_ROOT=""/home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/share/verilator""
Compile flags: /home/spack/spack/lib/spack/env/gcc/g++ -O2 -MMD -I. -I.. -I.. -I../../include -I../../include -MP -faligned-new -Wno-unused-parameter -Wno-shadow -DDEFENV_SYSTEMC="""" -DDEFENV_SYSTEMC_ARCH="""" -DDEFENV_SYSTEMC_INCLUDE="""" -DDEFENV_SYSTEMC_LIBDIR="""" -DDEFENV_VERILATOR_ROOT=""/home/spack/spack/opt/spack/linux-centos8-aarch64/gcc-8.4.1/verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/share/verilator""
Linking ../../bin/verilator_coverage_bin_dbg...
/home/spack/spack/lib/spack/env/gcc/g++ -gz -static-libgcc -Xlinker -gc-sections -o ../../bin/verilator_coverage_bin_dbg VlcMain.o -lpthread -lm
make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_opt'
Linking ../../bin/verilator_bin_dbg...
/home/spack/spack/lib/spack/env/gcc/g++ -gz -static-libgcc -Xlinker -gc-sections -o ../../bin/verilator_bin_dbg Verilator.o V3Active.o V3ActiveTop.o V3Assert.o V3AssertPre.o V3Ast.o V3AstNodes.o V3Begin.o V3Branch.o V3Broken.o V3CCtors.o V3CUse.o V3Case.o V3Cast.o V3Cdc.o V3Changed.o V3Class.o V3Clean.o V3Clock.o V3Combine.o V3Config.o V3Const__gen.o V3Coverage.o V3CoverageJoin.o V3Dead.o V3Delayed.o V3Depth.o V3DepthBlock.o V3Descope.o V3DupFinder.o V3EmitC.o V3EmitCBase.o V3EmitCConstPool.o V3EmitCFunc.o V3EmitCInlines.o V3EmitCMain.o V3EmitCMake.o V3EmitCModel.o V3EmitCSyms.o V3EmitMk.o V3EmitV.o V3EmitXml.o V3Error.o V3Expand.o V3File.o V3FileLine.o V3Gate.o V3GenClk.o V3Global.o V3Graph.o V3GraphAlg.o V3GraphAcyc.o V3GraphDfa.o V3GraphPathChecker.o V3GraphTest.o V3Hash.o V3Hasher.o V3HierBlock.o V3Inline.o V3Inst.o V3InstrCount.o V3Life.o V3LifePost.o V3LinkCells.o V3LinkDot.o V3LinkJump.o V3LinkInc.o V3LinkLValue.o V3LinkLevel.o V3LinkParse.o V3LinkResolve.o V3Localize.o V3MergeCond.o V3Name.o V3Number.o V3OptionParser.o V3Options.o V3Order.o V3Os.o V3Param.o V3Partition.o V3PreShell.o V3Premit.o V3ProtectLib.o V3Randomize.o V3Reloop.o V3Scope.o V3Scoreboard.o V3Slice.o V3Split.o V3SplitAs.o V3SplitVar.o V3Stats.o V3StatsReport.o V3String.o V3Subst.o V3Table.o V3Task.o V3Trace.o V3TraceDecl.o V3Tristate.o V3TSP.o V3Undriven.o V3Unknown.o V3Unroll.o V3Waiver.o V3Width.o V3WidthSel.o V3ParseImp.o V3ParseGrammar.o V3ParseLex.o V3PreProc.o -lpthread -lm
/usr/bin/ld: VlcMain.o: unable to initialize decompress status for section .debug_info
/usr/bin/ld: VlcMain.o: unable to initialize decompress status for section .debug_info
/usr/bin/ld: VlcMain.o: unable to initialize decompress status for section .debug_info
/usr/bin/ld: VlcMain.o: unable to initialize decompress status for section .debug_info
VlcMain.o: file not recognized: File format not recognized
collect2: error: ld returned 1 exit status
make[2]: *** [../Makefile_obj:287: ../../bin/verilator_coverage_bin_dbg] Error 1
make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
make[1]: *** [Makefile:70: ../bin/verilator_coverage_bin_dbg] Error 2
make[1]: *** Waiting for unfinished jobs....
/usr/bin/ld: Verilator.o: unable to initialize decompress status for section .debug_info
/usr/bin/ld: Verilator.o: unable to initialize decompress status for section .debug_info
/usr/bin/ld: Verilator.o: unable to initialize decompress status for section .debug_info
/usr/bin/ld: Verilator.o: unable to initialize decompress status for section .debug_info
Verilator.o: file not recognized: File format not recognized
collect2: error: ld returned 1 exit status
make[2]: *** [../Makefile_obj:287: ../../bin/verilator_bin_dbg] Error 1
make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
make[1]: *** [Makefile:66: ../bin/verilator_bin_dbg] Error 2
make[1]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src'
make: *** [Makefile:233: verilator_exe] Error 2
==> Error: ProcessError: Command exited with status 2:
'make' '-j8'
4 errors found in build log:
144 /home/spack/spack/lib/spack/env/gcc/g++ -gz -static-libgcc -Xlinker -gc-sections -o ../../bin/verilator_bin_dbg Verilator.o V3Active.o V3ActiveTop.o V3Assert.o V3AssertPre.o V3Ast.o V3AstNod
es.o V3Begin.o V3Branch.o V3Broken.o V3CCtors.o V3CUse.o V3Case.o V3Cast.o V3Cdc.o V3Changed.o V3Class.o V3Clean.o V3Clock.o V3Combine.o V3Config.o V3Const__gen.o V3Coverage.o V3CoverageJoin.
o V3Dead.o V3Delayed.o V3Depth.o V3DepthBlock.o V3Descope.o V3DupFinder.o V3EmitC.o V3EmitCBase.o V3EmitCConstPool.o V3EmitCFunc.o V3EmitCInlines.o V3EmitCMain.o V3EmitCMake.o V3EmitCModel.o
V3EmitCSyms.o V3EmitMk.o V3EmitV.o V3EmitXml.o V3Error.o V3Expand.o V3File.o V3FileLine.o V3Gate.o V3GenClk.o V3Global.o V3Graph.o V3GraphAlg.o V3GraphAcyc.o V3GraphDfa.o V3GraphPathChecker.o
V3GraphTest.o V3Hash.o V3Hasher.o V3HierBlock.o V3Inline.o V3Inst.o V3InstrCount.o V3Life.o V3LifePost.o V3LinkCells.o V3LinkDot.o V3LinkJump.o V3LinkInc.o V3LinkLValue.o V3LinkLevel.o V3Lin
kParse.o V3LinkResolve.o V3Localize.o V3MergeCond.o V3Name.o V3Number.o V3OptionParser.o V3Options.o V3Order.o V3Os.o V3Param.o V3Partition.o V3PreShell.o V3Premit.o V3ProtectLib.o V3Randomiz
e.o V3Reloop.o V3Scope.o V3Scoreboard.o V3Slice.o V3Split.o V3SplitAs.o V3SplitVar.o V3Stats.o V3StatsReport.o V3String.o V3Subst.o V3Table.o V3Task.o V3Trace.o V3TraceDecl.o V3Tristate.o V3T
SP.o V3Undriven.o V3Unknown.o V3Unroll.o V3Waiver.o V3Width.o V3WidthSel.o V3ParseImp.o V3ParseGrammar.o V3ParseLex.o V3PreProc.o -lpthread -lm
145 /usr/bin/ld: VlcMain.o: unable to initialize decompress status for section .debug_info
146 /usr/bin/ld: VlcMain.o: unable to initialize decompress status for section .debug_info
147 /usr/bin/ld: VlcMain.o: unable to initialize decompress status for section .debug_info
148 /usr/bin/ld: VlcMain.o: unable to initialize decompress status for section .debug_info
149 VlcMain.o: file not recognized: File format not recognized
>> 150 collect2: error: ld returned 1 exit status
>> 151 make[2]: *** [../Makefile_obj:287: ../../bin/verilator_coverage_bin_dbg] Error 1
152 make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
153 make[1]: *** [Makefile:70: ../bin/verilator_coverage_bin_dbg] Error 2
154 make[1]: *** Waiting for unfinished jobs....
155 /usr/bin/ld: Verilator.o: unable to initialize decompress status for section .debug_info
156 /usr/bin/ld: Verilator.o: unable to initialize decompress status for section .debug_info
157 /usr/bin/ld: Verilator.o: unable to initialize decompress status for section .debug_info
158 /usr/bin/ld: Verilator.o: unable to initialize decompress status for section .debug_info
159 Verilator.o: file not recognized: File format not recognized
>> 160 collect2: error: ld returned 1 exit status
>> 161 make[2]: *** [../Makefile_obj:287: ../../bin/verilator_bin_dbg] Error 1
162 make[2]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src/obj_dbg'
163 make[1]: *** [Makefile:66: ../bin/verilator_bin_dbg] Error 2
164 make[1]: Leaving directory '/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-src/src'
165 make: *** [Makefile:233: verilator_exe] Error 2
See build log for details:
/tmp/root/spack-stage/spack-stage-verilator-4.210-u6zs26mdvynhr427nts6kbalrbilqyzt/spack-build-out.txt
==> Error: Terminating after first install failure: ProcessError: Command exited with status 2:
'make' '-j8'
```
Can you help me analyze it?
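One plausible reading of the log above, offered only as an editorial note, is that the system linker (`/usr/bin/ld`) cannot read the zlib-compressed debug sections produced by the `-gz` flag that configure detected and enabled; that would explain both the "unable to initialize decompress status for section .debug_info" messages and the follow-on "file not recognized" failure. Below is a minimal sketch of one possible local workaround, assuming that diagnosis holds; the stage path is an example only, and pointing the build at a newer binutils (or simply rebuilding without `-gz`) would be the more conventional fix.
```python
#!/usr/bin/env python3
"""Hypothetical workaround sketch (not a confirmed fix): strip the -gz flag
from Verilator's generated makefiles so object files keep uncompressed debug
sections that an older /usr/bin/ld can read.  The stage path below is an
assumption and would need to match the actual Spack stage directory."""

import pathlib
import re

STAGE = pathlib.Path("/tmp/root/spack-stage")  # assumed location of the failing stage

for makefile in STAGE.rglob("Makefile_obj"):
    original = makefile.read_text()
    # Drop standalone -gz flags; leaves all other compile/link flags untouched.
    patched = re.sub(r"\s-gz(?=\s)", " ", original)
    if patched != original:
        makefile.write_text(patched)
        print(f"removed -gz from {makefile}")
```
Either way, the `spack debug report` output below plus the exact binutils version (`ld --version`) would help confirm or rule out that diagnosis.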
### Information on your system
[root@centos8 ~]# spack debug report
* **Spack:** 0.16.1-1624-a0b5dcca3c
* **Python:** 3.6.8
* **Platform:** linux-centos8-aarch64
* **Concretizer:** original
",1,when i make verilator and it failed in spack when i make verilator and it failed in spack console spack install y v keep stage dont restage no checksum fail fast verilator home spack spack opt spack linux gcc libiconv home spack spack opt spack linux gcc pkgconf home spack spack opt spack linux gcc xz home spack spack opt spack linux gcc zlib home spack spack opt spack linux gcc berkeley db home spack spack opt spack linux gcc libsigsegv home spack spack opt spack linux gcc diffutils home spack spack opt spack linux gcc tar home spack spack opt spack linux gcc ncurses home spack spack opt spack linux gcc home spack spack opt spack linux gcc home spack spack opt spack linux gcc home spack spack opt spack linux gcc readline home spack spack opt spack linux gcc libtool home spack spack opt spack linux gcc gettext home spack spack opt spack linux gcc gdbm home spack spack opt spack linux gcc perl home spack spack opt spack linux gcc home spack spack opt spack linux gcc autoconf home spack spack opt spack linux gcc texinfo home spack spack opt spack linux gcc bison home spack spack opt spack linux gcc automake home spack spack opt spack linux gcc findutils home spack spack opt spack linux gcc flex installing verilator no binary for verilator found installing from source using cached archive home spack spack var spack cache source cache archive tgz no patches needed for verilator verilator executing phase autoreconf verilator executing phase configure tmp root spack stage spack stage verilator spack src configure prefix home spack spack opt spack linux gcc verilator configuring for verilator checking whether to perform partial static linking of verilator binary yes checking whether to use tcmalloc check checking whether to use no checking whether to build for coverage collection no checking whether to use hardcoded paths yes checking whether to show and stop on compilation warnings no checking whether to run long tests no checking for gcc home spack spack lib spack env gcc gcc checking whether the c compiler works yes checking for c compiler default output file name a out checking for suffix of executables checking whether we are cross compiling no checking for suffix of object files o checking whether we are using the gnu c compiler yes checking whether home spack spack lib spack env gcc gcc accepts g yes checking for home spack spack lib spack env gcc gcc option to accept iso none needed checking whether we are using the gnu c compiler yes checking whether home spack spack lib spack env gcc g accepts g yes checking for a bsd compatible install usr bin install c compiler is home spack spack lib spack env gcc g version g gcc red hat checking that c compiler can compile simple program yes checking for ar ar checking for perl home spack spack opt spack linux gcc perl bin perl checking for usr bin checking for flex home spack spack opt spack linux gcc flex bin flex home spack spack opt spack linux gcc flex bin flex version flex checking for bison home spack spack opt spack linux gcc bison bin bison home spack spack opt spack linux gcc bison bin bison version bison gnu bison checking for ccache no checking how to run the c preprocessor home spack spack lib spack env gcc g e checking for grep that handles long lines and e usr bin grep checking for egrep usr bin grep e checking for ansi c header files yes checking for sys types h yes checking for sys stat h yes checking for stdlib h yes checking for string h yes checking for memory h yes checking for strings h yes checking for inttypes h yes 
checking for stdint h yes checking for unistd h yes checking for size t yes checking for size t cached yes checking for inline inline checking whether home spack spack lib spack env gcc g accepts pg yes checking whether home spack spack lib spack env gcc g accepts std gnu yes checking whether home spack spack lib spack env gcc g accepts std c yes checking whether home spack spack lib spack env gcc g accepts wextra yes checking whether home spack spack lib spack env gcc g accepts wfloat conversion yes checking whether home spack spack lib spack env gcc g accepts wlogical op yes checking whether home spack spack lib spack env gcc g accepts wthread safety no checking whether home spack spack lib spack env gcc g accepts qunused arguments no checking whether home spack spack lib spack env gcc g accepts faligned new yes checking whether home spack spack lib spack env gcc g accepts wno unused parameter yes checking whether home spack spack lib spack env gcc g accepts wno shadow yes checking whether home spack spack lib spack env gcc g accepts wno char subscripts yes checking whether home spack spack lib spack env gcc g accepts wno null conversion no checking whether home spack spack lib spack env gcc g accepts wno parentheses equality no checking whether home spack spack lib spack env gcc g accepts wno unused yes checking whether home spack spack lib spack env gcc g accepts og yes checking whether home spack spack lib spack env gcc g accepts ggdb yes checking whether home spack spack lib spack env gcc g accepts gz yes checking whether home spack spack lib spack env gcc g linker accepts gz yes checking whether home spack spack lib spack env gcc g accepts faligned new yes checking whether home spack spack lib spack env gcc g accepts fbracket depth no checking whether home spack spack lib spack env gcc g accepts fcf protection none yes checking whether home spack spack lib spack env gcc g accepts mno cet no checking whether home spack spack lib spack env gcc g accepts qunused arguments no checking whether home spack spack lib spack env gcc g accepts wno bool operation yes checking whether home spack spack lib spack env gcc g accepts wno tautological bitwise compare no checking whether home spack spack lib spack env gcc g accepts wno parentheses equality no checking whether home spack spack lib spack env gcc g accepts wno sign compare yes checking whether home spack spack lib spack env gcc g accepts wno uninitialized yes checking whether home spack spack lib spack env gcc g accepts wno unused but set variable yes checking whether home spack spack lib spack env gcc g accepts wno unused parameter yes checking whether home spack spack lib spack env gcc g accepts wno unused variable yes checking whether home spack spack lib spack env gcc g accepts wno shadow yes checking whether home spack spack lib spack env gcc g linker accepts mt no checking whether home spack spack lib spack env gcc g linker accepts pthread yes checking whether home spack spack lib spack env gcc g linker accepts lpthread yes checking whether home spack spack lib spack env gcc g linker accepts latomic yes checking whether home spack spack lib spack env gcc g linker accepts static libgcc yes checking whether home spack spack lib spack env gcc g linker accepts static libstdc no checking whether home spack spack lib spack env gcc g linker accepts xlinker gc sections yes checking whether home spack spack lib spack env gcc g linker accepts lpthread yes checking whether home spack spack lib spack env gcc g linker accepts lbcrypt no checking 
whether home spack spack lib spack env gcc g linker accepts lpsapi no checking whether home spack spack lib spack env gcc g linker accepts l libtcmalloc minimal a no checking whether home spack spack lib spack env gcc g supports c yes checking for struct stat st mtim tv nsec yes checking whether systemc is found in system path no configure creating config status config status creating makefile config status creating src makefile config status creating src makefile obj config status creating include verilated mk config status creating include verilated config h config status creating verilator pc config status creating verilator config cmake config status creating verilator config version cmake config status creating src config build h config status src config build h is unchanged now type make or sometimes gmake to build verilator verilator executing phase build make making verilator in src make c src make entering directory tmp root spack stage spack stage verilator spack src src make c obj dbg j tgt bin verilator bin dbg vl debug f makefile obj serial make c obj dbg tgt bin verilator coverage bin dbg vl debug vl vlcov f makefile obj serial vlcov make c obj opt j tgt bin verilator bin f makefile obj serial make entering directory tmp root spack stage spack stage verilator spack src src make warning jn forced in submake disabling jobserver mode make entering directory tmp root spack stage spack stage verilator spack src src make warning jn forced in submake disabling jobserver mode make nothing to be done for serial make leaving directory tmp root spack stage spack stage verilator spack src src obj dbg make c obj dbg tgt bin verilator bin dbg vl debug f makefile obj make entering directory tmp root spack stage spack stage verilator spack src src obj dbg make nothing to be done for serial vlcov make leaving directory tmp root spack stage spack stage verilator spack src src obj dbg make c obj dbg tgt bin verilator coverage bin dbg vl debug vl vlcov f makefile obj make nothing to be done for serial make leaving directory tmp root spack stage spack stage verilator spack src src obj opt make c obj opt tgt bin verilator bin f makefile obj make entering directory tmp root spack stage spack stage verilator spack src src obj dbg make entering directory tmp root spack stage spack stage verilator spack src src obj dbg make entering directory tmp root spack stage spack stage verilator spack src src obj opt compile flags home spack spack lib spack env gcc g og ggdb gz dvl debug d glibcxx debug mmd i i i i include i include mp faligned new wno unused parameter wno shadow ddefenv systemc ddefenv systemc arch ddefenv systemc include ddefenv systemc libdir ddefenv verilator root home spack spack opt spack linux gcc verilator share verilator compile flags home spack spack lib spack env gcc g og ggdb gz dvl debug d glibcxx debug mmd i i i i include i include mp faligned new wno unused parameter wno shadow ddefenv systemc ddefenv systemc arch ddefenv systemc include ddefenv systemc libdir ddefenv verilator root home spack spack opt spack linux gcc verilator share verilator compile flags home spack spack lib spack env gcc g mmd i i i i include i include mp faligned new wno unused parameter wno shadow ddefenv systemc ddefenv systemc arch ddefenv systemc include ddefenv systemc libdir ddefenv verilator root home spack spack opt spack linux gcc verilator share verilator linking bin verilator coverage bin dbg home spack spack lib spack env gcc g gz static libgcc xlinker gc sections o bin verilator coverage bin dbg 
vlcmain o lpthread lm make leaving directory tmp root spack stage spack stage verilator spack src src obj opt linking bin verilator bin dbg home spack spack lib spack env gcc g gz static libgcc xlinker gc sections o bin verilator bin dbg verilator o o o o o o o o o o o o o o o o o o o o o gen o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o lpthread lm usr bin ld vlcmain o unable to initialize decompress status for section debug info usr bin ld vlcmain o unable to initialize decompress status for section debug info usr bin ld vlcmain o unable to initialize decompress status for section debug info usr bin ld vlcmain o unable to initialize decompress status for section debug info vlcmain o file not recognized file format not recognized error ld returned exit status make error make leaving directory tmp root spack stage spack stage verilator spack src src obj dbg make error make waiting for unfinished jobs usr bin ld verilator o unable to initialize decompress status for section debug info usr bin ld verilator o unable to initialize decompress status for section debug info usr bin ld verilator o unable to initialize decompress status for section debug info usr bin ld verilator o unable to initialize decompress status for section debug info verilator o file not recognized file format not recognized error ld returned exit status make error make leaving directory tmp root spack stage spack stage verilator spack src src obj dbg make error make leaving directory tmp root spack stage spack stage verilator spack src src make error error processerror command exited with status make errors found in build log home spack spack lib spack env gcc g gz static libgcc xlinker gc sections o bin verilator bin dbg verilator o o o o o o es o o o o o o o o o o o o o o o gen o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o kparse o o o o o o o o o o o o o o o e o o o o o o o o o o o o o o o o o sp o o o o o o o o o o o lpthread lm usr bin ld vlcmain o unable to initialize decompress status for section debug info usr bin ld vlcmain o unable to initialize decompress status for section debug info usr bin ld vlcmain o unable to initialize decompress status for section debug info usr bin ld vlcmain o unable to initialize decompress status for section debug info vlcmain o file not recognized file format not recognized error ld returned exit status make error make leaving directory tmp root spack stage spack stage verilator spack src src obj dbg make error make waiting for unfinished jobs usr bin ld verilator o unable to initialize decompress status for section debug info usr bin ld verilator o unable to initialize decompress status for section debug info usr bin ld verilator o unable to initialize decompress status for section debug info usr bin ld verilator o unable to initialize decompress status for section debug info verilator o file not recognized file format not recognized error ld returned exit status make error make leaving directory tmp root spack stage spack stage verilator spack src src obj dbg make error make leaving directory tmp root spack stage spack stage verilator spack src src make error see build log for details tmp root spack stage spack stage verilator spack build out txt error terminating after first install failure processerror command exited with status make can you help me analyze it information on your system spack 
debug report spack python platform linux concretizer original ,1
1862,27576668123.0,IssuesEvent,2023-03-08 13:31:32,elastic/kibana,https://api.github.com/repos/elastic/kibana,closed,[Guided onboarding] A11y improvements ,enhancement Supportability Team:Journey/Onboarding,We need to review the a11y of the guided onboarding panel and try to fix any issues. [Here](https://eui.elastic.co/pr_6247/#/utilities/portal#a-custom-flyout) is an example of an accessible custom flyout. Also see https://github.com/elastic/eui/pull/6247 ,True,[Guided onboarding] A11y improvements - We need to review the a11y of the guided onboarding panel and try to fix any issues. [Here](https://eui.elastic.co/pr_6247/#/utilities/portal#a-custom-flyout) is an example of an accessible custom flyout. Also see https://github.com/elastic/eui/pull/6247 ,1, improvements we need to review the of the guided onboarding panel and try to fix any issues is an example of an accessible custom flyout also see ,1
1354,19403352884.0,IssuesEvent,2021-12-19 15:26:53,codenjoyme/codenjoy,https://api.github.com/repos/codenjoyme/codenjoy,closed,[windows][portable] Used clients approach to build server,p-clients p-windows-portable,"The old script should be rewritten, the same way it is now done for all the clients",True,"[windows][portable] Used clients approach to build server - The old script should be rewritten, the same way it is now done for all the clients",1, used clients approach to build server the old script should be rewritten the same way it is now done for all the clients,1
34020,7779775834.0,IssuesEvent,2018-06-05 17:53:53,volution/vonuvoli-scheme,https://api.github.com/repos/volution/vonuvoli-scheme,opened,Parser -- enhance error reporting of parsing errors,code-parser implementation,"## Tasks
* [ ] add line-numbers in transcript output;
* [ ] highlight line in transcript output;
* [ ] show only a few lines of the source code in the transcript output;
",1.0,"Parser -- enhance error reporting of parsing errors - ## Tasks
* [ ] add line-numbers in transcript output;
* [ ] highlight line in transcript output;
* [ ] show only a few lines of the source code in the transcript output;
",0,parser enhance error reporting of parsing errors tasks add line numbers in transcript output highlight line in transcript output show only a few lines of the source code in the transcript output ,0
233257,18957552565.0,IssuesEvent,2021-11-18 22:20:01,microsoft/vscode-python,https://api.github.com/repos/microsoft/vscode-python,closed,Prompt asking if you want to install pytest/nose is displayed even though the framework is already being installed,feature-request needs PR area-testing,"We should wait for the user to click on ""Yes"" to actually install the selected framework (after running Configure, Run or Discover Tests command)
",1.0,"Prompt asking if you want to install pytest/nose is displayed even though the framework is already being installed - We should wait for the user to click on ""Yes"" to actually install the selected framework (after running Configure, Run or Discover Tests command)
",0,prompt asking if you want to install pytest nose is displayed even though the framework is already being installed we should wait for the user to click on yes to actually install the selected framework after running configure run or discover tests command ,0
49535,6032102868.0,IssuesEvent,2017-06-09 02:08:25,ampproject/amphtml,https://api.github.com/repos/ampproject/amphtml,closed,CSS Keyframe test flake,P1: High Priority Related to: Flaky Tests,"```
Chrome 59.0.3071 (Linux 0.0.0) extractKeyframes discovery should scan in media CSS FAILED
AssertionError: expected JSON to be equal.
Exp: '[{""offset"":0,""opacity"":""0""},{""offset"":1,""opacity"":""0.2""}]'
Act: '[{""offset"":0,""opacity"":""0""},{""offset"":1,""opacity"":""0.1""}]'
at /home/travis/build/ampproject/amphtml/extensions/amp-animation/0.1/test/test-keyframes-extractor.js:183:29 <- /tmp/0d247d577d808d33bae951fe28e04ea8.browserify:57424:30
at
```",1.0,"CSS Keyframe test flake - ```
Chrome 59.0.3071 (Linux 0.0.0) extractKeyframes discovery should scan in media CSS FAILED
AssertionError: expected JSON to be equal.
Exp: '[{""offset"":0,""opacity"":""0""},{""offset"":1,""opacity"":""0.2""}]'
Act: '[{""offset"":0,""opacity"":""0""},{""offset"":1,""opacity"":""0.1""}]'
at /home/travis/build/ampproject/amphtml/extensions/amp-animation/0.1/test/test-keyframes-extractor.js:183:29 <- /tmp/0d247d577d808d33bae951fe28e04ea8.browserify:57424:30
at
```",0,css keyframe test flake chrome linux extractkeyframes discovery should scan in media css failed assertionerror expected json to be equal exp act at home travis build ampproject amphtml extensions amp animation test test keyframes extractor js tmp browserify at ,0
22029,11660555649.0,IssuesEvent,2020-03-03 03:44:53,cityofaustin/atd-data-tech,https://api.github.com/repos/cityofaustin/atd-data-tech,opened,Find someone in law dept regarding legal aspects of crashes,Product: Vision Zero Crash Data System Project: Vision Zero Crash Data System Service: PM Type: Meeting Type: Research Workgroup: CTM Workgroup: VZ migrated,"what data (public or not) that might be available regarding scooter crashes.
insurance?
court filings?
- https://www.citylab.com/transportation/2019/01/scooter-crash-accidents-safety-liability-bird-lime/577687/
*Migrated from [atd-vz-data #5](https://github.com/cityofaustin/atd-vz-data/issues/5)*",1.0,"Find someone in law dept regarding legal aspects of crashes - what data (public or not) that might be available regarding scooter crashes.
insurance?
court filings?
- https://www.citylab.com/transportation/2019/01/scooter-crash-accidents-safety-liability-bird-lime/577687/
*Migrated from [atd-vz-data #5](https://github.com/cityofaustin/atd-vz-data/issues/5)*",0,find someone in law dept regarding legal aspects of crashes what data public or not that might be available regarding scooter crashes insurance court filings migrated from ,0
142250,11460227424.0,IssuesEvent,2020-02-07 09:18:25,bogdanpolak/command-delphi,https://api.github.com/repos/bogdanpolak/command-delphi,opened,PoC - Build test suite for TAsyncCommand,unit tests,"Build prototype test suite for TAsyncCommand. Problems:
- Need to wait until the thread finishes
- Move fThread into protected if it will be required ",1.0,"PoC - Build test suite for TAsyncCommand - Build prototype test suite for TAsyncCommand. Problems:
- Need to wait until the thread finishes
- Move fThread into protected if it will be required ",0,poc build test suite for tasynccommand build prototype test suite for tasynccommand problems new to wait until thread will finish move fthread into protected if it will be required ,0
1412,20986824926.0,IssuesEvent,2022-03-29 04:43:44,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,https://github.com/MicrosoftDocs/azure-docs/issues/81994 NOT IMPLEMENTED,azure-supportability/svc triaged cxp doc-enhancement Pri2,"Can you please add feedback stating that there is no cost associated with increasing a quota, only with utilizing it. I have been getting a lot of partner requests asking whether there is a price associated with a quota increase. A one-liner should provide clarification on this.
I created a previous issue # https://github.com/MicrosoftDocs/azure-docs/issues/81994 which stated that this would be addressed, but I don't see the statement. Can we please add this to the document to reduce the churn?
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: e19f68c5-3917-c131-a379-3b9e3156593b
* Version Independent ID: 89f78479-edd5-f96c-b342-31e43ef72c92
* Content: [Increase regional vCPU quotas - Azure supportability](https://docs.microsoft.com/en-us/azure/azure-portal/supportability/regional-quota-requests#request-a-quota-increase-by-region-from-subscriptions)
* Content Source: [articles/azure-portal/supportability/regional-quota-requests.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/azure-portal/supportability/regional-quota-requests.md)
* Service: **azure-supportability**
* GitHub Login: @JnHs
* Microsoft Alias: **jenhayes**",True,"https://github.com/MicrosoftDocs/azure-docs/issues/81994 NOT IMPLEMENTED - Can you please add a feedback stating that there is no cost associated with increasing quota only cost associated with utilizing the same. I have been getting a lot of partner requests stating if there is a price associated with quota increase. A one liner should provide clarification on the same
I created a previous issue # https://github.com/MicrosoftDocs/azure-docs/issues/81994 which stated that this will be addressed but I dont see the statement. Can we please add this to the document to reduce the churn
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: e19f68c5-3917-c131-a379-3b9e3156593b
* Version Independent ID: 89f78479-edd5-f96c-b342-31e43ef72c92
* Content: [Increase regional vCPU quotas - Azure supportability](https://docs.microsoft.com/en-us/azure/azure-portal/supportability/regional-quota-requests#request-a-quota-increase-by-region-from-subscriptions)
* Content Source: [articles/azure-portal/supportability/regional-quota-requests.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/azure-portal/supportability/regional-quota-requests.md)
* Service: **azure-supportability**
* GitHub Login: @JnHs
* Microsoft Alias: **jenhayes**",1, not implemented can you please add a feedback stating that there is no cost associated with increasing quota only cost associated with utilizing the same i have been getting a lot of partner requests stating if there is a price associated with quota increase a one liner should provide clarification on the same i created a previous issue which stated that this will be addressed but i dont see the statement can we please add this to the document to reduce the churn document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service azure supportability github login jnhs microsoft alias jenhayes ,1
131092,18214669904.0,IssuesEvent,2021-09-30 01:40:55,mgh3326/google-calendar-slackbot,https://api.github.com/repos/mgh3326/google-calendar-slackbot,opened,CVE-2021-37136 (High) detected in netty-codec-4.1.50.Final.jar,security vulnerability,"## CVE-2021-37136 - High Severity Vulnerability
Vulnerable Library - netty-codec-4.1.50.Final.jar
Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.
Path to dependency file: google-calendar-slackbot/build.gradle
Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec/4.1.50.Final/cbcb646c9380c6cdc3f56603ae6418a11418ce0f/netty-codec-4.1.50.Final.jar,/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec/4.1.50.Final/cbcb646c9380c6cdc3f56603ae6418a11418ce0f/netty-codec-4.1.50.Final.jar
Dependency Hierarchy:
- spring-boot-devtools-2.3.1.RELEASE (Root Library)
- spring-boot-dependencies-2.3.1.RELEASE
- :x: **netty-codec-4.1.50.Final.jar** (Vulnerable Library)
Vulnerability Details
The Bzip2 decompression decoder function doesn't allow setting size restrictions on the decompressed output data (which affects the allocation size used during decompression).
All users of Bzip2Decoder are affected. The malicious input can trigger an OOME and so a DoS attack
Publish Date: 2021-07-21
URL: CVE-2021-37136
CVSS 3 Score Details (7.5 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://github.com/advisories/GHSA-grg4-wf29-r9vv
Release Date: 2021-07-21
Fix Resolution: io.netty:netty-codec:4.1.68.Final
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2021-37136 (High) detected in netty-codec-4.1.50.Final.jar - ## CVE-2021-37136 - High Severity Vulnerability
Vulnerable Library - netty-codec-4.1.50.Final.jar
Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.
Path to dependency file: google-calendar-slackbot/build.gradle
Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec/4.1.50.Final/cbcb646c9380c6cdc3f56603ae6418a11418ce0f/netty-codec-4.1.50.Final.jar,/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec/4.1.50.Final/cbcb646c9380c6cdc3f56603ae6418a11418ce0f/netty-codec-4.1.50.Final.jar
Dependency Hierarchy:
- spring-boot-devtools-2.3.1.RELEASE (Root Library)
- spring-boot-dependencies-2.3.1.RELEASE
- :x: **netty-codec-4.1.50.Final.jar** (Vulnerable Library)
Vulnerability Details
The Bzip2 decompression decoder function doesn't allow setting size restrictions on the decompressed output data (which affects the allocation size used during decompression).
All users of Bzip2Decoder are affected. The malicious input can trigger an OOME and so a DoS attack
Publish Date: 2021-07-21
URL: CVE-2021-37136
CVSS 3 Score Details (7.5 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://github.com/advisories/GHSA-grg4-wf29-r9vv
Release Date: 2021-07-21
Fix Resolution: io.netty:netty-codec:4.1.68.Final
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in netty codec final jar cve high severity vulnerability vulnerable library netty codec final jar netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers and clients path to dependency file google calendar slackbot build gradle path to vulnerable library root gradle caches modules files io netty netty codec final netty codec final jar root gradle caches modules files io netty netty codec final netty codec final jar dependency hierarchy spring boot devtools release root library spring boot dependencies release x netty codec final jar vulnerable library vulnerability details the decompression decoder function doesn t allow setting size restrictions on the decompressed output data which affects the allocation size used during decompression all users of are affected the malicious input can trigger an oome and so a dos attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty codec final step up your open source security game with whitesource ,0
33413,15930134978.0,IssuesEvent,2021-04-14 00:08:53,tarantool/tarantool,https://api.github.com/repos/tarantool/tarantool,opened,The connections are pinned to iproto threads leading to uneven load and zero requests balancing,bug iproto performance,"In the new feature `box.cfg.iproto_threads` the connections are pinned to the thread which managed to `accept()` them first. It might work fine for short-living connections and for small and fast requests. But leads to an obvious issue of the feature being hardly usable for long-living connections, and when some connections are much heavier than the others. Because it might and will happen on a long-living instance, that some threads are loaded much more than the others - the other threads can't take any part of their load since the connections are pinned.
That is the case at least for vshard. Bucket discovery might be a heavy operation when it is aggressive, and when people make it even more aggressive by tweaking timeouts. Also it is the case for the rebalancer. When it is started on a big memtx cluster, the buckets are being sent very aggressively. I know that on some installations the rebalancer might use CPU close to 100% when there is not much other work. The bucket sending happens in a single connection using multiple fibers, which makes it quite heavy and long (hours, or days if there were errors in the middle, or when there are sharded vinyl spaces).",True,"The connections are pinned to iproto threads leading to uneven load and zero requests balancing - In the new feature `box.cfg.iproto_threads` the connections are pinned to the thread which managed to `accept()` them first. It might work fine for short-living connections and for small and fast requests. But leads to an obvious issue of the feature being hardly usable for long-living connections, and when some connections are much heavier than the others. Because it might and will happen on a long-living instance, that some threads are loaded much more than the others - the other threads can't take any part of their load since the connections are pinned.
That is the case at least for vshard. Bucket discovery might be a heavy operation when it is aggressive, and when people make it even more aggressive by tweaking timeouts. Also it is the case for the rebalancer. When it is started on a big memtx cluster, the buckets are being sent very aggressively. I know that on some installations the rebalancer might use CPU close to 100% when there is not much other work. The bucket sending happens in a single connection using multiple fibers, which makes it quite heavy and long (hours, or days if there were errors in the middle, or when there are sharded vinyl spaces).",0,the connections are pinned to iproto threads leading to uneven load and zero requests balancing in the new feature box cfg iproto threads the connections are pinned to the thread which managed to accept them first it might work fine for short living connections and for small and fast requests but leads to an obvious issue of the feature being hardly usable for long living connections and when some connections are much heavier than the others because it might and will happen on a long living instance that some threads are loaded much more than the others the other threads can t take any part of their load since the connections are pinned that is the case at least for vshard bucket discovery might be a heavy operation when it is aggressive and when people make it even more aggressive by tweaking timeouts also it is the case for the rebalancer when it is started on a big memtx cluster the buckets are being sent very aggressively i know that on some installations the rebalancer might use cpu close to when there is not much other work the bucket sending happens in a single connection using multiple fibers which makes it quite heavy and long hours or days if there were errors in the middle or when there are sharded vinyl spaces ,0
1864,27585460712.0,IssuesEvent,2023-03-08 19:23:13,golang/vulndb,https://api.github.com/repos/golang/vulndb,closed,x/vulndb: potential Go vuln in github.com/answerdev/answer: GHSA-qrwm-xqfr-4vhv,excluded: NOT_IMPORTABLE,"In GitHub Security Advisory [GHSA-qrwm-xqfr-4vhv](https://github.com/advisories/GHSA-qrwm-xqfr-4vhv), there is a vulnerability in the following Go packages or modules:
| Unit | Fixed | Vulnerable Ranges |
| - | - | - |
| [github.com/answerdev/answer](https://pkg.go.dev/github.com/answerdev/answer) | 1.0.6 | < 1.0.6 |
Cross references:
- Module github.com/answerdev/answer appears in issue #1541 EFFECTIVELY_PRIVATE
- Module github.com/answerdev/answer appears in issue #1550 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1551 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1552 EFFECTIVELY_PRIVATE
- Module github.com/answerdev/answer appears in issue #1553 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1554 EFFECTIVELY_PRIVATE
- Module github.com/answerdev/answer appears in issue #1592 NOT_IMPORTABLE
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/answerdev/answer
versions:
- fixed: 1.0.6
packages:
- package: github.com/answerdev/answer
description: Cross-site Scripting (XSS) - Stored in GitHub repository answerdev/answer
prior to 1.0.6.
cves:
- CVE-2023-1242
ghsas:
- GHSA-qrwm-xqfr-4vhv
references:
- web: https://nvd.nist.gov/vuln/detail/CVE-2023-1242
- fix: https://github.com/answerdev/answer/commit/90bfa0dcc7b49482f1d1e31aee3ab073f3c13dd9
- web: https://huntr.dev/bounties/71c24c5e-ceb2-45cf-bda7-fa195d37e289
- advisory: https://github.com/advisories/GHSA-qrwm-xqfr-4vhv
```",True,"x/vulndb: potential Go vuln in github.com/answerdev/answer: GHSA-qrwm-xqfr-4vhv - In GitHub Security Advisory [GHSA-qrwm-xqfr-4vhv](https://github.com/advisories/GHSA-qrwm-xqfr-4vhv), there is a vulnerability in the following Go packages or modules:
| Unit | Fixed | Vulnerable Ranges |
| - | - | - |
| [github.com/answerdev/answer](https://pkg.go.dev/github.com/answerdev/answer) | 1.0.6 | < 1.0.6 |
Cross references:
- Module github.com/answerdev/answer appears in issue #1541 EFFECTIVELY_PRIVATE
- Module github.com/answerdev/answer appears in issue #1550 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1551 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1552 EFFECTIVELY_PRIVATE
- Module github.com/answerdev/answer appears in issue #1553 NOT_IMPORTABLE
- Module github.com/answerdev/answer appears in issue #1554 EFFECTIVELY_PRIVATE
- Module github.com/answerdev/answer appears in issue #1592 NOT_IMPORTABLE
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/answerdev/answer
versions:
- fixed: 1.0.6
packages:
- package: github.com/answerdev/answer
description: Cross-site Scripting (XSS) - Stored in GitHub repository answerdev/answer
prior to 1.0.6.
cves:
- CVE-2023-1242
ghsas:
- GHSA-qrwm-xqfr-4vhv
references:
- web: https://nvd.nist.gov/vuln/detail/CVE-2023-1242
- fix: https://github.com/answerdev/answer/commit/90bfa0dcc7b49482f1d1e31aee3ab073f3c13dd9
- web: https://huntr.dev/bounties/71c24c5e-ceb2-45cf-bda7-fa195d37e289
- advisory: https://github.com/advisories/GHSA-qrwm-xqfr-4vhv
```",1,x vulndb potential go vuln in github com answerdev answer ghsa qrwm xqfr in github security advisory there is a vulnerability in the following go packages or modules unit fixed vulnerable ranges cross references module github com answerdev answer appears in issue effectively private module github com answerdev answer appears in issue not importable module github com answerdev answer appears in issue not importable module github com answerdev answer appears in issue effectively private module github com answerdev answer appears in issue not importable module github com answerdev answer appears in issue effectively private module github com answerdev answer appears in issue not importable see for instructions on how to triage this report modules module github com answerdev answer versions fixed packages package github com answerdev answer description cross site scripting xss stored in github repository answerdev answer prior to cves cve ghsas ghsa qrwm xqfr references web fix web advisory ,1
451648,13039715213.0,IssuesEvent,2020-07-28 17:13:38,ntop/ntopng,https://api.github.com/repos/ntop/ntopng,opened,Pool pills ordering,priority ticket user interface,"For pool pills

use the same order as

Make sure the first entry (and not the second) is selected when entering the page",1.0,"Pool pills ordering - For pool pills

use the same order as

Make sure the first entry (and not the second) is selected when entering the page",0,pool pills ordering for pool pills use the same order as make sure the first entry and not the second is selected when entering the page,0
1144,14618686932.0,IssuesEvent,2020-12-22 16:36:45,Alistair-Bell/MageEngine,https://api.github.com/repos/Alistair-Bell/MageEngine,closed,Cannot find glslc / glslangValidator (Windows),bug portability,"Doing some code porting and for some reason the absolute path is broken, will need to find the bug causing the issue",True,"Cannot find glslc / glslangValidator (Windows) - Doing some code porting and for some reason the absolute path is broken, will need to find the bug causing the issue",1,cannot find glslc glslangvalidator windows doing some code porting and for some reason the absolute path is broken will need to find the bug causing the issue,1
33134,27251680184.0,IssuesEvent,2023-02-22 08:36:54,woocommerce/woocommerce,https://api.github.com/repos/woocommerce/woocommerce,closed,Add milestone to PRs based on paths,type: task tool: monorepo infrastructure,"Currently our automated workflow is to add milestones to PRs when it also contains the label `plugins: woocommerce`. However this can sometimes be missed if the PR does not contain this label and gets merged [for example](https://github.com/woocommerce/woocommerce/pull/34382).
The [workflow](https://github.com/woocommerce/woocommerce/blob/trunk/.github/workflows/pull-request-post-merge-processing.yml) had its initial acceptance criteria set [here](36-gh-woocommerce/platform-private). I believe it was so that other items in the monorepo, such as tools, do not get picked up as a release PR.
The better solution ""if possible"" is perhaps to exclude the paths that are not going into a WooCommerce release. For example https://github.com/woocommerce/woocommerce/tree/trunk/tools
Acceptance criteria:
* Add milestone to PRs only if the PR contains work that goes into the WooCommerce release.",1.0,"Add milestone to PRs based on paths - Currently our automated workflow is to add milestones to PRs when it also contains the label `plugins: woocommerce`. However this can sometimes be missed if the PR does not contain this label and gets merged [for example](https://github.com/woocommerce/woocommerce/pull/34382).
The [workflow](https://github.com/woocommerce/woocommerce/blob/trunk/.github/workflows/pull-request-post-merge-processing.yml) had an initial acceptance criteria set [here](36-gh-woocommerce/platform-private). I believe it was so that other items in the monorepo such as tools does not get picked up as a release PR.
The better solution ""if possible"" perhaps is to exclude the paths that are not going into WooCommerce release. For example https://github.com/woocommerce/woocommerce/tree/trunk/tools
Acceptance criteria:
* Add milestone to PRs only if the PR contains work that goes into the WooCommerce release.",0,add milestone to prs based on paths currently our automated workflow is to add milestones to prs when it also contains the label plugins woocommerce however this can sometimes be missed if the pr does not contain this label and gets merged the had an initial acceptance criteria set gh woocommerce platform private i believe it was so that other items in the monorepo such as tools does not get picked up as a release pr the better solution if possible perhaps is to exclude the paths that are not going into woocommerce release for example acceptance criteria add milestone to prs only if the pr contains work that goes into the woocommerce release ,0
1980,31162389481.0,IssuesEvent,2023-08-16 16:57:52,Azure/azure-functions-host,https://api.github.com/repos/Azure/azure-functions-host,closed,No-error-message: Setting extension bundle in dotnet-isolated results in 0 Functions started,Supportability,"If I use one of the common host.json files that asserts extension bundles, the deployed app will silently fail to load with 0 functions started.
I expected to get an error message saying extension bundle config in host.json is invalid, and read to learn how to set properly for dotnet-isolated.
Repros in cloud and using Core Tools 4.0.5198.
#### Repro steps
1. func init -> dotnet-isolated
2. func new -> HTTP
3. Change host.json to this:
```
{
""version"": ""2.0"",
""logging"": { ""fileLoggingMode"": ""debugOnly"", ""logLevel"": { ""default"": ""Information"", ""Host.Results"": ""Error"", ""Function"": ""Error"", ""Host.Aggregator"": ""Trace"" } },
""extensionBundle"": {
""id"": ""Microsoft.Azure.Functions.ExtensionBundle"",
""version"": ""[4.0.0, 5.0.0)""
}
}
```
4. func start, or deploy
#### Expected behavior
In Application insights logs I would see a specific message after 0 Functions start:
""ExtensionsBundle config in host.json is invalid for dotnet-isolated. See for details.""
",True,"No-error-message: Setting extension bundle in dotnet-isolated results in 0 Functions started - If I use one of the common host.json files that asserts extension bundles, the deployed app will silently fail to load with 0 functions started.
I expected to get an error message saying extension bundle config in host.json is invalid, and read to learn how to set properly for dotnet-isolated.
Repros in cloud and using Core Tools 4.0.5198.
#### Repro steps
1. func init -> dotnet-isolated
2. func new -> HTTP
3. Change host.json to this:
```
{
""version"": ""2.0"",
""logging"": { ""fileLoggingMode"": ""debugOnly"", ""logLevel"": { ""default"": ""Information"", ""Host.Results"": ""Error"", ""Function"": ""Error"", ""Host.Aggregator"": ""Trace"" } },
""extensionBundle"": {
""id"": ""Microsoft.Azure.Functions.ExtensionBundle"",
""version"": ""[4.0.0, 5.0.0)""
}
}
```
4. func start, or deploy
#### Expected behavior
In Application insights logs I would see a specific message after 0 Functions start:
""ExtensionsBundle config in host.json is invalid for dotnet-isolated. See for details.""
",1,no error message setting extension bundle in dotnet isolated results in functions started if i use one of the common host json files that asserts extension bundles the deployed app will silently fail to load with functions started i expected to get an error message saying extension bundle config in host json is invalid and read to learn how to set properly for dotnet isolated repros in cloud and using core tools repro steps func init dotnet isolated func new http change host json to this version logging fileloggingmode debugonly loglevel default information host results error function error host aggregator trace extensionbundle id microsoft azure functions extensionbundle version func start or deploy expected behavior in application insights logs i would see a specific message after functions start extensionsbundle config in host json is invalid for dotnet isolated see for details ,1
19921,11348737176.0,IssuesEvent,2020-01-24 01:33:42,Azure/azure-cli,https://api.github.com/repos/Azure/azure-cli,reopened,"az commands can trigger 429 / ""too many requests"" failures and provide no recourse for recovery.",AKS Core Service Attention,"## Describe the bug
Running az commands can generate 429 ""too many requests"" exceptions from Azure (possibly related to `az aks`? or possibly all commands -- I've definitely seen this at random from Azure before). It seems this happens with long running commands after they have already executed and az is polling for a result from Azure.
Ideally, when this happens, az should just [exponentially backoff](https://en.wikipedia.org/wiki/Exponential_backoff) (i.e. increase the timeout and try again). (Sometimes in the 429 response, there is even a `Retry-After` header that tells you exactly how long to wait!)
IMO, the *REAL* issue is that, you get back a failure message, and the command aborts, with no results -- **even if the command was successful** (e.g. you can't even just try to rerun the command at that point). -- Basically, the command shouldn't throw a perma-error unless it has actually failed. If the command is still running and might possibly succeed but you just failed to poll for a result, you should do a backoff and retry.
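For illustration only, here is a minimal sketch of the backoff behaviour being requested, written against a plain HTTP poll rather than the actual azure-cli internals; `requests`, the URL, and the retry budget are placeholder assumptions, not how az is implemented.
```python
import random
import time

import requests  # placeholder HTTP client; azure-cli's own transport would differ


def poll_with_backoff(url, max_attempts=8, base_delay=1.0):
    """Poll `url`, retrying on HTTP 429 with exponential backoff.

    Honours a numeric Retry-After header when the service sends one;
    otherwise doubles the delay (with jitter) on every throttled attempt.
    """
    delay = base_delay
    for _ in range(max_attempts):
        resp = requests.get(url)
        if resp.status_code != 429:
            return resp  # success, or a non-throttling error to surface as-is
        retry_after = resp.headers.get("Retry-After")
        # Assumes a numeric Retry-After; an HTTP-date form would need parsing.
        wait = float(retry_after) if retry_after else delay + random.uniform(0, delay)
        time.sleep(wait)
        delay *= 2
    raise RuntimeError(f"still throttled after {max_attempts} attempts")
```
Under a scheme like this the command only surfaces a hard failure once the retry budget is exhausted, which is the recovery path being asked for here.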
**Command Name**
`az aks nodepool add --resource-group MyResourceGroup --cluster-name MyClusterName --os-type Windows --node-vm-size ""Standard_B2s"" --name window --node-count 2 --kubernetes-version 1.13.12 --min-count 2 --max-count 6 --enable-cluster-autoscaler`
**Errors:**
```
WARNING: The behavior of this command has been altered by the following extension: aks-preview
ERROR: Deployment failed. Correlation ID: de22582b-9a0c-462b-b15a-7fd3d85d07e2. VMSSAgentPoolReconciler retry failed: autorest/azure: Service returned an error. Status=429
Code=""OperationNotAllowed"" Message=""The server rejected the request because too many requests have been received for this subscription."" Details=[{""code"":""TooManyRequests"",""message"":""{\""operationGroup\"":\
""HighCostGetVMScaleSet30Min\"",\""startTime\"":\""2020-01-17T17:29:36.1768987+00:00\"",\""endTime\"":\""2020-01-17T17:44:36.1768987+00:00\"",\""allowedRequestCount\"":1329,\""measuredRequestCount\"":1419}"",""target"":""H
ighCostGetVMScaleSet30Min""}] InnerError={""internalErrorCode"":""TooManyRequestsReceived""}
```
## To Reproduce:
Steps to reproduce the behavior.
- Run a long running command that continually polls azure for a result while your subscription is under heavy load (possibly from other such commands running in parallel?), until an http response with a status of 429 (""Too many requests"") is returned by the Azure API that is being called.
## Expected Behavior
- Az.exe shouldn't fail when the initial command turns out to be successful -- because it leaves the user in an unrecoverable state (e.g. the initial command appears to have failed, there are no output results, and re-running the command also fails because now the resource exists! -- so you not only don't handle the 429 yourself, but you prevent the user from handling it too!).
- Specifically, calls to Azure made by Az.exe which return a 429 status should have transient fault handling baked in -- as specified by MSDN, [best practices for cloud applications](https://docs.microsoft.com/en-us/azure/architecture/best-practices/transient-faults): `All applications that communicate with remote services and resources must be sensitive to transient faults.`
## Environment Summary
```
Windows-10-10.0.18362-SP0
Python 3.6.6
Shell: cmd.exe
azure-cli 2.0.80
Extensions:
aks-preview 0.4.27
application-insights 0.1.1
```
## Additional Context
",1.0,"az commands can trigger 429 / ""too many requests"" failures and provides no recourse for recovery. - ## Describe the bug
Running az commands can generate 429 ""too many requests"" exceptions from Azure (possibly related to `az aks`? or possibly all commands -- I've definitely seen this at random from Azure before). It seems this happens with long running commands after they have already executed and az is polling for a result from Azure.
Ideally, when this happens, az should just [back off exponentially](https://en.wikipedia.org/wiki/Exponential_backoff) (i.e. increase the timeout and try again). (Sometimes the 429 response even includes a `Retry-After` header that tells you exactly how long to wait!)
IMO, the *REAL* issue is that you get back a failure message and the command aborts with no results -- **even if the command was successful** (e.g. you can't even just rerun the command at that point). Basically, the command shouldn't throw a permanent error unless it has actually failed. If the command is still running and might possibly succeed but you just failed to poll for a result, you should back off and retry.
**Command Name**
`az aks nodepool add --resource-group MyResourceGroup --cluster-name MyClusterName --os-type Windows --node-vm-size ""Standard_B2s"" --name window --node-count 2 --kubernetes-version 1.13.12 --min-count 2 --max-count 6 --enable-cluster-autoscaler`
**Errors:**
```
WARNING: The behavior of this command has been altered by the following extension: aks-preview
ERROR: Deployment failed. Correlation ID: de22582b-9a0c-462b-b15a-7fd3d85d07e2. VMSSAgentPoolReconciler retry failed: autorest/azure: Service returned an error. Status=429
Code=""OperationNotAllowed"" Message=""The server rejected the request because too many requests have been received for this subscription."" Details=[{""code"":""TooManyRequests"",""message"":""{\""operationGroup\"":\
""HighCostGetVMScaleSet30Min\"",\""startTime\"":\""2020-01-17T17:29:36.1768987+00:00\"",\""endTime\"":\""2020-01-17T17:44:36.1768987+00:00\"",\""allowedRequestCount\"":1329,\""measuredRequestCount\"":1419}"",""target"":""H
ighCostGetVMScaleSet30Min""}] InnerError={""internalErrorCode"":""TooManyRequestsReceived""}
```
## To Reproduce:
Steps to reproduce the behavior.
- Run a long running command that continually polls azure for a result while your subscription is under heavy load (possibly from other such commands running in parallel?), until an http response with a status of 429 (""Too many requests"") is returned by the Azure API that is being called.
## Expected Behavior
- Az.exe shouldn't fail when the initial command turns out to be successful -- because it leaves the user in an unrecoverable state (e.g. the initial command appears to have failed, there are no output results, and re-running the command also fails because now the resource exists! -- so you not only don't handle the 429 yourself, but you prevent the user from handling it too!).
- Specifically, calls to Azure made by Az.exe which return a 429 status should have transient fault handling baked in -- as specified by MSDN, [best practices for cloud applications](https://docs.microsoft.com/en-us/azure/architecture/best-practices/transient-faults): `All applications that communicate with remote services and resources must be sensitive to transient faults.`
## Environment Summary
```
Windows-10-10.0.18362-SP0
Python 3.6.6
Shell: cmd.exe
azure-cli 2.0.80
Extensions:
aks-preview 0.4.27
application-insights 0.1.1
```
## Additional Context
",0,az commands can trigger too many requests failures and provides no recourse for recovery describe the bug running az commands can generate too many requests exceptions from azure possibly related to az aks or possibly all commands i ve definitely seen this at random from azure before it seems this happens with long running commands after they have already executed and az is polling for a result from azure ideally when this happens az should just i e increase the timeout and try again sometimes in the response there is even a retry after header that tells you exactly how long to wait imo the real issue is that you get back a failure message and the command aborts with no results even if the command was successful e g you can t even just try to rerun the command at that point basically the command shouldn t throw a perma error unless it has actually failed if the command is still running and might possibly succeed but you just failed to poll for a result you should do a backoff and retry command name az aks nodepool add resource group myresourcegroup cluster name myclustername os type windows node vm size standard name window node count kubernetes version min count max count enable cluster autoscaler errors warning the behavior of this command has been altered by the following extension aks preview error deployment failed correlation id vmssagentpoolreconciler retry failed autorest azure service returned an error status code operationnotallowed message the server rejected the request because too many requests have been received for this subscription details code toomanyrequests message operationgroup starttime endtime allowedrequestcount measuredrequestcount target h innererror internalerrorcode toomanyrequestsreceived to reproduce steps to reproduce the behavior run a long running command that continually polls azure for a result while your subscription is under heavy load possibly from other such commands running in parallel until an http response with a status of too many requests is returned by the azure api that is being called expected behavior az exe shouldn t fail when the initial command turns out to be successful because it leaves the user in an unrecoverable state e g the initial command appears to have failed there is no output results and re running the command also fails because now the resource exists so you not only don t handle the yourself but you prevent the user from handling it too specifically calls to azure made by az exe which return a status should have transient fault handling baked in as specified by msdn all applications that communicate with remote services and resources must be sensitive to transient faults environment summary windows python shell cmd exe azure cli extensions aks preview application insights additional context ,0
167336,6336666726.0,IssuesEvent,2017-07-26 21:37:34,angular/angular-cli,https://api.github.com/repos/angular/angular-cli,closed,I'm making a starter angular 4 project and it's giving me a warning,priority: 2 (required) severity2: inconvenient,"### [BUG] I'm making a starter angular 4 project and it's giving me a warning
### Versions.
@angular/cli: 1.1.3 (e)
node: 6.9.4
os: win32 x64
@angular/animations: 4.2.5
@angular/common: 4.2.5
@angular/compiler: 4.2.5
@angular/core: 4.2.5
@angular/forms: 4.2.5
@angular/http: 4.2.5
@angular/platform-browser: 4.2.5
@angular/platform-browser-dynamic: 4.2.5
@angular/router: 4.2.5
@angular/cli: 1.1.3
@angular/compiler-cli: 4.2.5
@angular/language-service: 4.2.5
@ngtools/webpack: 1.5.0
### Repro steps.
1. I make a clean new project with angular-cli running ng new ""project""
2. I run ng-eject to convert project into webpack
3. I make another npm install for new dependencies
4. I run npm run start
### The log given by the failure.
```
WARNING in ./~/@angular/compiler/@angular/compiler.es5.js
(Emitted value instead of an instance of Error) Cannot find source file 'compiler.es5.ts': Error: Can't resolve './compiler.es5.ts' in 'C:\Users\jose.segura\WebstormProjects\untitled\testt\node_modules\@angular\compiler
\@angular'
@ ./~/@angular/platform-browser-dynamic/@angular/platform-browser-dynamic.es5.js 7:0-72
@ ./src/main.ts
@ multi (webpack)-dev-server/client?http://localhost:4200 ./src/main.ts
```
### Desired functionality.
The project runs normally, but I don't know if this warning will cause errors in the future
",1.0,"Im making a starter angular 4 project and it's getting me a warning - ### [BUG] Im making a starter angular 4 project and it's getting me a warning
### Versions.
@angular/cli: 1.1.3 (e)
node: 6.9.4
os: win32 x64
@angular/animations: 4.2.5
@angular/common: 4.2.5
@angular/compiler: 4.2.5
@angular/core: 4.2.5
@angular/forms: 4.2.5
@angular/http: 4.2.5
@angular/platform-browser: 4.2.5
@angular/platform-browser-dynamic: 4.2.5
@angular/router: 4.2.5
@angular/cli: 1.1.3
@angular/compiler-cli: 4.2.5
@angular/language-service: 4.2.5
@ngtools/webpack: 1.5.0
### Repro steps.
1. I make a clean new project with angular-cli running ng new ""project""
2. I run ng-eject to convert project into webpack
3. I make another npm install for new dependencies
4. I run npm run start
### The log given by the failure.
```
WARNING in ./~/@angular/compiler/@angular/compiler.es5.js
(Emitted value instead of an instance of Error) Cannot find source file 'compiler.es5.ts': Error: Can't resolve './compiler.es5.ts' in 'C:\Users\jose.segura\WebstormProjects\untitled\testt\node_modules\@angular\compiler
\@angular'
@ ./~/@angular/platform-browser-dynamic/@angular/platform-browser-dynamic.es5.js 7:0-72
@ ./src/main.ts
@ multi (webpack)-dev-server/client?http://localhost:4200 ./src/main.ts
```
### Desired functionality.
The project runs normally, but I don't know if this warning will cause errors in the future
",0,im making a starter angular project and it s getting me a warning im making a starter angular project and it s getting me a warning versions angular cli e node os angular animations angular common angular compiler angular core angular forms angular http angular platform browser angular platform browser dynamic angular router angular cli angular compiler cli angular language service ngtools webpack repro steps i make a clean new project with angular cli running ng new project i run ng eject to convert project into webpack i make another npm install for new dependencies i run npm run start the log given by the failure warning in angular compiler angular compiler js emitted value instead of an instance of error cannot find source file compiler ts error can t resolve compiler ts in c users jose segura webstormprojects untitled testt node modules angular compiler angular angular platform browser dynamic angular platform browser dynamic js src main ts multi webpack dev server client src main ts desired functionality the project runs normally but i dont know if this warning will throw errors in the future ,0
80,3008015774.0,IssuesEvent,2015-07-27 19:01:09,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,opened,OSX bugs w/ AMD device,portability,"I presume these are driver bugs but we should try working around them (might even be a single issue causing all problems).
http://www.openwall.com/lists/john-users/2015/07/27/1
http://www.openwall.com/lists/john-users/2015/07/27/2",True,"OSX bugs w/ AMD device - I presume these are driver bugs but we should try working around them (might even be a single issue causing all problems).
http://www.openwall.com/lists/john-users/2015/07/27/1
http://www.openwall.com/lists/john-users/2015/07/27/2",1,osx bugs w amd device i presume these are driver bugs but we should try working around them might even be a single issue causing all problems ,1
90532,26132479138.0,IssuesEvent,2022-12-29 07:38:24,spack/spack,https://api.github.com/repos/spack/spack,opened,Installation issue: esmf,build-error,"### Steps to reproduce the issue
ESMF fails to compile on NOAA WCOSS2/Acorn systems with Cray PE+Intel compilers+Cray MPICH. Builds successfully if ESMF_OS=Linux and ESMF_COMM=mpich3, but esmf/package.py sets those to Unicos and mpi, respectively.
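As a possible local workaround (a sketch only, under the assumption that forcing these settings is acceptable on this system -- it is not the upstream fix), the two environment variables named above could be overridden from a package in a custom Spack repository; `setup_build_environment` is Spack's standard hook for exporting build-time environment variables:
```python
# Hypothetical override placed in a local Spack repository; it reuses the
# builtin esmf package and only changes the two variables the report identifies.
from spack.package import *
from spack.pkg.builtin.esmf import Esmf as BuiltinEsmf


class Esmf(BuiltinEsmf):
    def setup_build_environment(self, env):
        super().setup_build_environment(env)
        # Use the Linux/mpich3 code paths instead of Unicos/mpi on Cray PE.
        env.set("ESMF_OS", "Linux")
        env.set("ESMF_COMM", "mpich3")
```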
```console
$ spack spec -l esmf
Input spec
--------------------------------
esmf
Concretized
--------------------------------
6rxazwx esmf@8.3.0b09%intel@19.1.3.304~debug~external-lapack+mpi+netcdf~parallelio+pio~pnetcdf~shared~xerces build_system=makefile arch=linux-sles15-zen2
zhz3fn4 ^cray-mpich@8.1.9%intel@19.1.3.304~wrappers build_system=generic arch=linux-sles15-zen2
7vjuutq ^libxml2@2.10.3%intel@19.1.3.304~python build_system=autotools arch=linux-sles15-zen2
inla2gw ^libiconv@1.16%intel@19.1.3.304 build_system=autotools libs=shared,static arch=linux-sles15-zen2
eyz7fdy ^pkg-config@0.29.2%intel@19.1.3.304+internal_glib build_system=autotools arch=linux-sles15-zen2
ilrrzhx ^xz@5.2.6%intel@19.1.3.304~pic build_system=autotools libs=shared,static arch=linux-sles15-zen2
e65xq45 ^netcdf-c@4.7.4%intel@19.1.3.304~dap~fsync~hdf4~jna+mpi+optimize~parallel-netcdf+pic~shared build_system=autotools arch=linux-sles15-zen2
xc2opw5 ^hdf5@1.10.6%intel@19.1.3.304~cxx+fortran+hl~ipo~java+mpi~shared~szip+threadsafe+tools api=default build_system=cmake build_type=RelWithDebInfo arch=linux-sles15-zen2
utbfs5w ^cmake@3.20.2%intel@19.1.3.304~doc+ncurses+ownlibs~qt build_system=generic build_type=Release arch=linux-sles15-zen2
sj5fkki ^m4@1.4.18%intel@19.1.3.304+sigsegv build_system=autotools patches=3877ab5,fc9b616 arch=linux-sles15-zen2
cry7dbu ^netcdf-fortran@4.5.4%intel@19.1.3.304~doc+pic~shared build_system=autotools arch=linux-sles15-zen2
ij722et ^zlib@1.2.11%intel@19.1.3.304+optimize+pic~shared build_system=makefile arch=linux-sles15-zen2
...
```
@climbfuji @jedwards4b
### Error message
Here is a sampling of the error messages (other undefined references include mpi_send_, mpi_wait_, etc.):
Error message
>> 13388 /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /path/to/cache/build_stage/spack-stage-esmf-8.3.0b09-jmblchtb6cpbt4ot6f22x45qwyrju334/spack-src/src/Infrastr
ucture/IO/PIO/piodarray.F90.in:162: undefined reference to `mpi_bcast_'
>> 13389 /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /path/to/cache/build_stage/spack-stage-esmf-8.3.0b09-jmblchtb6cpbt4ot6f22x45qwyrju334/spack-src/src/Infrastr
ucture/IO/PIO/piodarray.F90.in:163: undefined reference to `mpi_bcast_'
>> 13390 /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /path/to/cache/build_stage/spack-stage-esmf-8.3.0b09-jmblchtb6cpbt4ot6f22x45qwyrju334/spack-src/src/Infrastr
ucture/IO/PIO/piodarray.F90.in:164: undefined reference to `mpi_bcast_'
>> 13391 /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /path/to/cache/build_stage/spack-stage-esmf-8.3.0b09-jmblchtb6cpbt4ot6f22x45qwyrju334/spack-src/src/Infrastr
ucture/IO/PIO/piodarray.F90.in:165: undefined reference to `mpi_bcast_'
### Information on your system
Lmod modules in ESMF build env:
1) craype-x86-rome (H) 2) envvar/1.0 3) PrgEnv-intel/8.3.3 4) intel/19.1.3.304 5) craype/2.7.13 6) libfabric/1.11.0.0. (H) 7) craype-network-ofi (H) 8) cray-mpich/8.1.9
spack debug report:
* **Spack:** 0.20.0.dev0 (3a152138df8f6db58019b1b8d19b75a0bbbd0c23)
* **Python:** 3.6.15
* **Platform:** linux-sles15-zen2
* **Concretizer:** clingo
### Additional information
[spack-build-env.txt](https://github.com/spack/spack/files/10318160/spack-build-env.txt)
[spack-build-out.txt](https://github.com/spack/spack/files/10318162/spack-build-out.txt)
### General information
- [X] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [X] I have run `spack maintainers ` and **@mentioned** any maintainers
- [X] I have uploaded the build log and environment files
- [X] I have searched the issues of this repo and believe this is not a duplicate",1.0,"Installation issue: esmf - ### Steps to reproduce the issue
ESMF fails to compile on NOAA WCOSS2/Acorn systems with Cray PE+Intel compilers+Cray MPICH. Builds successfully if ESMF_OS=Linux and ESMF_COMM=mpich3, but esmf/package.py sets those to Unicos and mpi, respectively.
```console
$ spack spec -l esmf
Input spec
--------------------------------
esmf
Concretized
--------------------------------
6rxazwx esmf@8.3.0b09%intel@19.1.3.304~debug~external-lapack+mpi+netcdf~parallelio+pio~pnetcdf~shared~xerces build_system=makefile arch=linux-sles15-zen2
zhz3fn4 ^cray-mpich@8.1.9%intel@19.1.3.304~wrappers build_system=generic arch=linux-sles15-zen2
7vjuutq ^libxml2@2.10.3%intel@19.1.3.304~python build_system=autotools arch=linux-sles15-zen2
inla2gw ^libiconv@1.16%intel@19.1.3.304 build_system=autotools libs=shared,static arch=linux-sles15-zen2
eyz7fdy ^pkg-config@0.29.2%intel@19.1.3.304+internal_glib build_system=autotools arch=linux-sles15-zen2
ilrrzhx ^xz@5.2.6%intel@19.1.3.304~pic build_system=autotools libs=shared,static arch=linux-sles15-zen2
e65xq45 ^netcdf-c@4.7.4%intel@19.1.3.304~dap~fsync~hdf4~jna+mpi+optimize~parallel-netcdf+pic~shared build_system=autotools arch=linux-sles15-zen2
xc2opw5 ^hdf5@1.10.6%intel@19.1.3.304~cxx+fortran+hl~ipo~java+mpi~shared~szip+threadsafe+tools api=default build_system=cmake build_type=RelWithDebInfo arch=linux-sles15-zen2
utbfs5w ^cmake@3.20.2%intel@19.1.3.304~doc+ncurses+ownlibs~qt build_system=generic build_type=Release arch=linux-sles15-zen2
sj5fkki ^m4@1.4.18%intel@19.1.3.304+sigsegv build_system=autotools patches=3877ab5,fc9b616 arch=linux-sles15-zen2
cry7dbu ^netcdf-fortran@4.5.4%intel@19.1.3.304~doc+pic~shared build_system=autotools arch=linux-sles15-zen2
ij722et ^zlib@1.2.11%intel@19.1.3.304+optimize+pic~shared build_system=makefile arch=linux-sles15-zen2
...
```
@climbfuji @jedwards4b
### Error message
Here is a sampling of the error messages (other undefined references include mpi_send_, mpi_wait_, etc.):
Error message
>> 13388 /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /path/to/cache/build_stage/spack-stage-esmf-8.3.0b09-jmblchtb6cpbt4ot6f22x45qwyrju334/spack-src/src/Infrastr
ucture/IO/PIO/piodarray.F90.in:162: undefined reference to `mpi_bcast_'
>> 13389 /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /path/to/cache/build_stage/spack-stage-esmf-8.3.0b09-jmblchtb6cpbt4ot6f22x45qwyrju334/spack-src/src/Infrastr
ucture/IO/PIO/piodarray.F90.in:163: undefined reference to `mpi_bcast_'
>> 13390 /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /path/to/cache/build_stage/spack-stage-esmf-8.3.0b09-jmblchtb6cpbt4ot6f22x45qwyrju334/spack-src/src/Infrastr
ucture/IO/PIO/piodarray.F90.in:164: undefined reference to `mpi_bcast_'
>> 13391 /usr/lib64/gcc/x86_64-suse-linux/7/../../../../x86_64-suse-linux/bin/ld: /path/to/cache/build_stage/spack-stage-esmf-8.3.0b09-jmblchtb6cpbt4ot6f22x45qwyrju334/spack-src/src/Infrastr
ucture/IO/PIO/piodarray.F90.in:165: undefined reference to `mpi_bcast_'
### Information on your system
Lmod modules in ESMF build env:
1) craype-x86-rome (H) 2) envvar/1.0 3) PrgEnv-intel/8.3.3 4) intel/19.1.3.304 5) craype/2.7.13 6) libfabric/1.11.0.0. (H) 7) craype-network-ofi (H) 8) cray-mpich/8.1.9
spack debug report:
* **Spack:** 0.20.0.dev0 (3a152138df8f6db58019b1b8d19b75a0bbbd0c23)
* **Python:** 3.6.15
* **Platform:** linux-sles15-zen2
* **Concretizer:** clingo
### Additional information
[spack-build-env.txt](https://github.com/spack/spack/files/10318160/spack-build-env.txt)
[spack-build-out.txt](https://github.com/spack/spack/files/10318162/spack-build-out.txt)
### General information
- [X] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [X] I have run `spack maintainers ` and **@mentioned** any maintainers
- [X] I have uploaded the build log and environment files
- [X] I have searched the issues of this repo and believe this is not a duplicate",0,installation issue esmf steps to reproduce the issue esmf fails to compile on noaa acorn systems with cray pe intel compilers cray mpich builds successfully if esmf os linux and esmf comm but esmf package py sets those to unicos and mpi respectively console spack spec l esmf input spec esmf concretized esmf intel debug external lapack mpi netcdf parallelio pio pnetcdf shared xerces build system makefile arch linux cray mpich intel wrappers build system generic arch linux intel python build system autotools arch linux libiconv intel build system autotools libs shared static arch linux pkg config intel internal glib build system autotools arch linux ilrrzhx xz intel pic build system autotools libs shared static arch linux netcdf c intel dap fsync jna mpi optimize parallel netcdf pic shared build system autotools arch linux intel cxx fortran hl ipo java mpi shared szip threadsafe tools api default build system cmake build type relwithdebinfo arch linux cmake intel doc ncurses ownlibs qt build system generic build type release arch linux intel sigsegv build system autotools patches arch linux netcdf fortran intel doc pic shared build system autotools arch linux zlib intel optimize pic shared build system makefile arch linux climbfuji error message here is a sampling of the error messages other undefined references include mpi send mpi wait etc error message usr gcc suse linux suse linux bin ld path to cache build stage spack stage esmf spack src src infrastr ucture io pio piodarray in undefined reference to mpi bcast usr gcc suse linux suse linux bin ld path to cache build stage spack stage esmf spack src src infrastr ucture io pio piodarray in undefined reference to mpi bcast usr gcc suse linux suse linux bin ld path to cache build stage spack stage esmf spack src src infrastr ucture io pio piodarray in undefined reference to mpi bcast usr gcc suse linux suse linux bin ld path to cache build stage spack stage esmf spack src src infrastr ucture io pio piodarray in undefined reference to mpi bcast information on your system lmod modules in esmf build env craype rome h envvar prgenv intel intel craype libfabric h craype network ofi h cray mpich spack debug report spack python platform linux concretizer clingo additional information general information i have run spack debug report and reported the version of spack python platform i have run spack maintainers and mentioned any maintainers i have uploaded the build log and environment files i have searched the issues of this repo and believe this is not a duplicate,0
630,8481676809.0,IssuesEvent,2018-10-25 16:20:28,arangodb/arangodb,https://api.github.com/repos/arangodb/arangodb,closed,Misleading error message on upgrade with old schema,1 Bug 2 Fixed supportability,"## My Environment
* __ArangoDB Version__: upgraded from 3.3 to 3.4
* __Deployment Mode__: Single Server
* __Deployment Strategy__: systemctl
* __Operating System__: Fedora
* __Used Package__: .rpm
__Problem__:
```
2018-10-14T10:01:47Z [15360] ERROR {startup} Database directory version (30316) is lower than current version (30400).
2018-10-14T10:01:47Z [15360] ERROR {startup} ----------------------------------------------------------------------
2018-10-14T10:01:47Z [15360] ERROR {startup} It seems like you have upgraded the ArangoDB binary.
2018-10-14T10:01:47Z [15360] ERROR {startup} If this is what you wanted to do, please restart with the'
2018-10-14T10:01:47Z [15360] ERROR {startup} --database.auto-upgrade true'
2018-10-14T10:01:47Z [15360] ERROR {startup} option to upgrade the data in the database directory.'
2018-10-14T10:01:47Z [15360] ERROR {startup} Normally you can use the control script to upgrade your database'
2018-10-14T10:01:47Z [15360] ERROR {startup} /etc/init.d/arangodb stop'
2018-10-14T10:01:47Z [15360] ERROR {startup} /etc/init.d/arangodb upgrade'
2018-10-14T10:01:47Z [15360] ERROR {startup} /etc/init.d/arangodb start'
2018-10-14T10:01:47Z [15360] ERROR {startup} ----------------------------------------------------------------------'
```
/etc/init.d/arangodb does not exist; calling ""update"" with systemctl also fails (unknown operation)
__Expected result__:
improve the error message and tell the correct location of stop/upgrade/start script",True,"Misleading error message on upgrade with old schema - ## My Environment
* __ArangoDB Version__: upgraded from 3.3 to 3.4
* __Deployment Mode__: Single Server
* __Deployment Strategy__: systemctl
* __Operating System__: Fedora
* __Used Package__: .rpm
__Problem__:
```
2018-10-14T10:01:47Z [15360] ERROR {startup} Database directory version (30316) is lower than current version (30400).
2018-10-14T10:01:47Z [15360] ERROR {startup} ----------------------------------------------------------------------
2018-10-14T10:01:47Z [15360] ERROR {startup} It seems like you have upgraded the ArangoDB binary.
2018-10-14T10:01:47Z [15360] ERROR {startup} If this is what you wanted to do, please restart with the'
2018-10-14T10:01:47Z [15360] ERROR {startup} --database.auto-upgrade true'
2018-10-14T10:01:47Z [15360] ERROR {startup} option to upgrade the data in the database directory.'
2018-10-14T10:01:47Z [15360] ERROR {startup} Normally you can use the control script to upgrade your database'
2018-10-14T10:01:47Z [15360] ERROR {startup} /etc/init.d/arangodb stop'
2018-10-14T10:01:47Z [15360] ERROR {startup} /etc/init.d/arangodb upgrade'
2018-10-14T10:01:47Z [15360] ERROR {startup} /etc/init.d/arangodb start'
2018-10-14T10:01:47Z [15360] ERROR {startup} ----------------------------------------------------------------------'
```
/etc/init.d/arangodb does not exist; calling ""update"" with systemctl also fails (unknown operation)
__Expected result__:
improve the error message and tell the correct location of stop/upgrade/start script",1,misleading error message on upgrade with old schema my environment arangodb version upgraded from to deployment mode single server deployment strategy systemctl operating system fedora used package rpm problem error startup database directory version is lower than current version error startup error startup it seems like you have upgraded the arangodb binary error startup if this is what you wanted to do please restart with the error startup database auto upgrade true error startup option to upgrade the data in the database directory error startup normally you can use the control script to upgrade your database error startup etc init d arangodb stop error startup etc init d arangodb upgrade error startup etc init d arangodb start error startup etc init d arangodb does not exist calling update with systemctl also fails unknown operation expected result improve the error message and tell the correct location of stop upgrade start script,1
27203,5319258427.0,IssuesEvent,2017-02-14 06:07:05,Houston-Inc/ppr.js,https://api.github.com/repos/Houston-Inc/ppr.js,opened,Library documentation,documentation,Each library class should have own documentation which explain how and when to use it. ,1.0,Library documentation - Each library class should have own documentation which explain how and when to use it. ,0,library documentation each library class should have own documentation which explain how and when to use it ,0
763274,26749934421.0,IssuesEvent,2023-01-30 18:44:50,GoogleCloudPlatform/alloydb-auth-proxy,https://api.github.com/repos/GoogleCloudPlatform/alloydb-auth-proxy,closed,cmd: TestPProfServer failed,type: bug priority: p2 flakybot: issue,"This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 7ccd62a356d3a2af21958163f7814a085a2eb365
buildURL: https://github.com/GoogleCloudPlatform/alloydb-auth-proxy/actions/runs/4045923608
status: failed
Test output 2023/01/30 16:29:08 SIGINT signal received. Shutting down...
2023/01/30 16:29:08 The proxy has encountered a terminal error: unable to start: [proj.region.clust.inst] Unable to mount socket:
root_test.go:936: failed to dial endpoint: Get ""http://localhost:9191/debug/pprof/"": dial tcp [::1]:9191: connectex: No connection could be made because the target machine actively refused it. ",1.0,"cmd: TestPProfServer failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 7ccd62a356d3a2af21958163f7814a085a2eb365
buildURL: https://github.com/GoogleCloudPlatform/alloydb-auth-proxy/actions/runs/4045923608
status: failed
Test output 2023/01/30 16:29:08 SIGINT signal received. Shutting down...
2023/01/30 16:29:08 The proxy has encountered a terminal error: unable to start: [proj.region.clust.inst] Unable to mount socket:
root_test.go:936: failed to dial endpoint: Get ""http://localhost:9191/debug/pprof/"": dial tcp [::1]:9191: connectex: No connection could be made because the target machine actively refused it. ",0,cmd testpprofserver failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output sigint signal received shutting down the proxy has encountered a terminal error unable to start unable to mount socket root test go failed to dial endpoint get dial tcp connectex no connection could be made because the target machine actively refused it ,0
243,4805557116.0,IssuesEvent,2016-11-02 16:19:10,jemalloc/jemalloc,https://api.github.com/repos/jemalloc/jemalloc,closed,Lazy lock may be broken on Windows?,bug portability,"Currently the default for MinGW builds is to enable lazy locking (e.g. `--enable-lazy-lock` to the configure script), but it seems problematic for Windows. The `isthreaded` global variable indicates whether threads are active and locks should be used, but this is only set to `true` on a `DLL_THREAD_ATTACH` event on Windows (unlike on Unix where `pthread_create` is hooked).
[According to Windows](https://msdn.microsoft.com/en-us/library/windows/desktop/ms682583%28v=vs.85%29.aspx), however, ""the call is made in the context of the new thread"", and I believe this means that problems can later arise. For example, if thread A uses jemalloc, spawns thread B, then uses jemalloc some more, it could be the case that thread A enters a jemalloc critical section before thread B has used jemalloc, meaning it _doesn't grab a lock_. Later on thread B uses jemalloc, enabling future calls to locking functions. If thread A then exits the critical section, it will attempt to unlock a not-locked mutex, causing the process to abort.
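To make the interleaving concrete, here is a minimal Python sketch of the pattern described above (the names `isthreaded`, `lock_if_needed`, and `unlock_if_needed` are illustrative; jemalloc's real code is C and uses its own mutex wrappers):
```python
import threading

isthreaded = False   # flipped to True the first time a second thread touches the allocator
mutex = threading.Lock()

def lock_if_needed():
    # Lazy locking: skip the mutex while the process looks single-threaded.
    if isthreaded:
        mutex.acquire()

def unlock_if_needed():
    if isthreaded:
        mutex.release()  # raises RuntimeError if the lock was never acquired

# The interleaving from the report:
lock_if_needed()         # thread A enters a critical section: isthreaded is False, no lock taken
isthreaded = True        # thread B's first allocator use (DLL_THREAD_ATTACH) flips the flag
try:
    unlock_if_needed()   # thread A leaves the critical section and releases a lock it never took
except RuntimeError as exc:
    print("unlock of a not-locked mutex:", exc)
```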
I _think_ this means that lazy locking can't be done reliably on Windows, and perhaps the default for MinGW should be switched back? The default was enabled after #83 was filed in 13473c7, but #83 never seemed to produce concrete evidence as to why lazy locking was at fault, and a number of mutex initialization details in jemalloc have changed in the meantime, so it's likely that this has been fixed under the hood.
We discovered this when updating Rust to use jemalloc 4.0.0 after a [few](https://github.com/rust-lang/rust/pull/28173) [failed](https://github.com/rust-lang/rust/pull/28304) [attempts](https://github.com/rust-lang/rust/pull/29214) which all seemed to bounce for spurious reasons.
---
Unfortunately I haven't been able to come up with a nice succinct test case to reproduce this. My current threshold is ""checkout Rust, update jemalloc, build, run one test 100 times and it'll die at least once"". I can try to reduce this down, however, if it would help! I figured I'd try to run the idea of lazy locking by here first though.
",True,"Lazy lock may be broken on Windows? - Currently the default for MinGW builds is to enable lazy locking (e.g. `--enable-lazy-lock` to the configure script), but it seems problematic for Windows. The `isthreaded` global variable indicates whether threads are active and locks should be used, but this is only set to `true` on a `DLL_THREAD_ATTACH` event on Windows (unlike on Unix where `pthread_create` is hooked).
[According to Windows](https://msdn.microsoft.com/en-us/library/windows/desktop/ms682583%28v=vs.85%29.aspx), however, ""the call is made in the context of the new thread"", and I believe this means that problems can later arise. For example, if thread A uses jemalloc, spawns thread B, then uses jemalloc some more, it could be the case that thread A enters a jemalloc critical section before thread B has used jemalloc, meaning it _doesn't grab a lock_. Later on thread B uses jemalloc, enabling future calls to locking functions. If thread A then exits the critical section, it will attempt to unlock a not-locked mutex, causing the process to abort.
I _think_ this means that lazy locking can't be done reliably on Windows, and perhaps the default for MinGW should be switched back? The default was enabled after #83 was filed in 13473c7, but #83 never seemed to produce concrete evidence as to why lazy locking was at fault, and a number of mutex initialization details in jemalloc have changed in the meantime, so it's likely that this has been fixed under the hood.
We discovered this when updating Rust to use jemalloc 4.0.0 after a [few](https://github.com/rust-lang/rust/pull/28173) [failed](https://github.com/rust-lang/rust/pull/28304) [attempts](https://github.com/rust-lang/rust/pull/29214) which all seemed to bounce for spurious reasons.
---
Unfortunately I haven't been able to come up with a nice succinct test case to reproduce this. My current threshold is ""checkout Rust, update jemalloc, build, run one test 100 times and it'll die at least once"". I can try to reduce this down, however, if it would help! I figured I'd try to run the idea of lazy locking by here first though.
",1,lazy lock may be broken on windows currently the default for mingw builds is to enable lazy locking e g enable lazy lock to the configure script but it seems problematic for windows the isthreaded global variable indicates whether threads are active and locks should be used but this is only set to true on a dll thread attach event on windows unlike on unix where pthread create is hooked however the call is made in the context of the new thread and i believe this means that problems can later arise for example if thread a uses jemalloc spawns thread b then uses jemalloc some more it could be the case that thread a enters a jemalloc critical section before thread b has used jemalloc meaning it doesn t grab a lock later on thread b uses jemalloc enabling future calls to locking functions if thread a then exits the critical section it will attempt to unlock a not locked mutex causing the process to abort i think this means that lazy locking isn t possible on windows to do reliably and perhaps the default for mingw should be switched back the default was enabled after was filed in but didn t look like it ever came up with concrete evidence to why lazy locking was at fault and it looks like a number of initialization changes to mutexes in jemalloc has changed in the meantime so it s likely that this has been fixed under the hood we discovered this when updating rust to use jemalloc after a which all seemed to bounce for spurious reasons unfortunately i haven t been able to come up with a nice succinct test case to reproduce this my current threshold is checkout rust update jemalloc build run one test times and it ll die at least once i can try to reduce this down however if it would help i figured i d try to run the idea of lazy locking by here first though ,1
80401,10172133328.0,IssuesEvent,2019-08-08 09:57:22,links-lang/links,https://api.github.com/repos/links-lang/links,opened,User guide / language documentation,documentation,"We currently don't have any kind of User Guide/language documentation that would be maintained and up to date. I've been thinking we could start pretending that we have one. The idea is that we actually create a stub of a guide and every time we add a new feature to the language, change existing syntax, add new compiler setting, etc., we document it in the guide. We almost do this already: most of the features we implement get a fairly detailed description in PRs, tickets, or changelogs. There would be some extra effort needed to add this into the guide but it seems like we're already doing 80% of the documenting effort (which then sadly gets buried in old PRs that nobody reads). With time that should give us a fairly good documentation and perhaps then filling in the missing gaps won't be too hard.
Does that sound like a good idea? Would all the devs be willing to document new features they are adding?
As for the technical side I'm not sure what would be the best, but I'd be for some markup language. Maybe Sphinx? Seems fairly lightweight.
Related: #474",1.0,"User guide / language documentation - We currently don't have any kind of User Guide/language documentation that would be maintained and up to date. I've been thinking we could start pretending that we have one. The idea is that we actually create a stub of a guide and every time we add a new feature to the language, change existing syntax, add new compiler setting, etc., we document it in the guide. We almost do this already: most of the features we implement get a fairly detailed description in PRs, tickets, or changelogs. There would be some extra effort needed to add this into the guide but it seems like we're already doing 80% of the documenting effort (which then sadly gets buried in old PRs that nobody reads). With time that should give us a fairly good documentation and perhaps then filling in the missing gaps won't be too hard.
Does that sound like a good idea? Would all the devs be willing to document new features they are adding?
As for the technical side I'm not sure what would be the best, but I'd be for some markup language. Maybe Sphinx? Seems fairly lightweight.
Related: #474",0,user guide language documentation we currently don t have any kind of user guide language documentation that would be maintained and up to date i ve been thinking we could start pretending that we have one the idea is that we actually create a stub of a guide and every time we add a new feature to the language change existing syntax add new compiler setting etc we document it in the guide we almost do this already most of the features we implement get a fairly detailed description in prs tickets or changelogs there would be some extra effort needed to add this into the guide but it seems like we re already doing of the documenting effort which then sadly gets buried in old prs that nobody reads with time that should give us a fairly good documentation and perhaps then filling in the missing gaps won t be too hard does that sound like a good idea would all the devs be willing to document new features they are adding as for the technical side i m not sure what would be the best but i d be for some markup language maybe sphinx seems fairly lightweight related ,0
1263,16741706347.0,IssuesEvent,2021-06-11 10:37:21,primefaces/primevue,https://api.github.com/repos/primefaces/primevue,closed,"AccordionTab error with v-if=""false""",bug vue2-portable,"**I'm submitting a ...**
```
[x] bug report
[ ] feature request
[ ] support request
```
**CodeSandbox Case (Bug Reports)**
https://codesandbox.io/s/determined-glade-ks1s3?file=/src/App.vue
**Current behavior**
An AccordionTab with `v-if=""false""` causes a `TypeError: child.children.forEach is not a function` because `child` seems to be a string.
**Expected behavior**
An AccordionTab with `v-if=""false""` is just removed from the DOM.
* **Vue version:** 3.1.1
* **PrimeVue version:** 3.5.0
* **Browser:** [Chrome | ??? ]",True,"AccordionTab error with v-if=""false"" - **I'm submitting a ...**
```
[x] bug report
[ ] feature request
[ ] support request
```
**CodeSandbox Case (Bug Reports)**
https://codesandbox.io/s/determined-glade-ks1s3?file=/src/App.vue
**Current behavior**
An AccordionTab with `v-if=""false""` causes a `TypeError: child.children.forEach is not a function` because `child` seems to be a string.
**Expected behavior**
An AccordionTab with `v-if=""false""` is just removed from the DOM.
* **Vue version:** 3.1.1
* **PrimeVue version:** 3.5.0
* **Browser:** [Chrome | ??? ]",1,accordiontab error with v if false i m submitting a bug report feature request support request codesandbox case bug reports current behavior an accordiontab with v if false causes a typeerror child children foreach is not a function because child seems to be a string expected behavior an accordiontab with v if false is just removed from the dom vue version primevue version browser ,1
1938,30506208893.0,IssuesEvent,2023-07-18 17:04:54,jqlang/jq,https://api.github.com/repos/jqlang/jq,closed,[feature] add compilation to WASM,portability,"Hi,
I see that it's already possible to build the project with llvm. It would be great to be able to compile it to WASM (via llvm & https://emscripten.org/index.html).
Context:
I wanted to see if I could compile jq to WASM and embed it into an HTML page to recreate jqplay.org _without_ a back-end to handle the jq queries. Everything would happen in the browser.
",True,"[feature] add compilation to WASM - Hi,
I see that it's already possible to build the project with llvm. It would be great to be able to compile it to WASM (via llvm & https://emscripten.org/index.html).
Context:
I wanted to see if I could compile jq to WASM and embed it into an HTML page to recreate jqplay.org _without_ a back-end to handle the jq queries. Everything would happen in the browser.
",1, add compilation to wasm hi i see that it s already possible to build the project with llvm it would be great to be able to compile it to wasm via llvm context i wanted to see if i could compile jq to wasm to embed it into a html page to recreate jqplay org without a back end to handle the jq queries everything would happen in the browser ,1
381988,26481298237.0,IssuesEvent,2023-01-17 14:52:49,Cloud-Drift/clouddrift,https://api.github.com/repos/Cloud-Drift/clouddrift,closed,Building docs fails,bug documentation,"See https://github.com/Cloud-Drift/clouddrift/actions/runs/3935783550/jobs/6731763496.
The relevant bit is:
```
Theme error:
An error happened in rendering the page api.
Reason: UndefinedError(""'logo' is undefined"")
make: *** [Makefile:20: html] Error 2
Error: Process completed with exit code 2.
```
This error goes away for me locally after I have commented out the `html_theme_options` in docs/conf.py. However, it doesn't seem to go away in GitHub Actions and I don't understand why.
@philippemiron do you have an idea?",1.0,"Building docs fails - See https://github.com/Cloud-Drift/clouddrift/actions/runs/3935783550/jobs/6731763496.
The relevant bit is:
```
Theme error:
An error happened in rendering the page api.
Reason: UndefinedError(""'logo' is undefined"")
make: *** [Makefile:20: html] Error 2
Error: Process completed with exit code 2.
```
This error goes away for me locally after I have commented out the `html_theme_options` in docs/conf.py. However, it doesn't seem to go away in GitHub Actions and I don't understand why.
@philippemiron do you have an idea?",0,building docs fails see the relevant bit is theme error an error happened in rendering the page api reason undefinederror logo is undefined make error error process completed with exit code this error goes away for me locally after i have commented out the html theme options in docs conf py however it doesn t seem to go away in github actions and i don t understand why philippemiron do you have an idea ,0
675,9037887409.0,IssuesEvent,2019-02-09 15:10:38,portacle/portacle,https://api.github.com/repos/portacle/portacle,closed,Cannot pass runtime options to sbcl,portability,"Any args passed to the sbcl launcher come after `--nosysinit` and `--userinit` which are toplevel options; runtime options cannot appear after toplevel options.
In this case, I wanted to change the dynamic space size.
I think the best solution might be to put all arguments before the arguments added by the portacle launcher, but am up for other possibilities.",True,"Cannot pass runtime options to sbcl - Any args passed to the sbcl launcher come after `--nosysinit` and `--userinit` which are toplevel options; runtime options cannot appear after toplevel options.
In this case, I wanted to change the dynamic space size.
I think the best solution might be to put all arguments before the arguments added by the portacle launcher, but am up for other possibilites.",1,cannot pass runtime options to sbcl any args passed to the sbcl launcher come after nosysinit and userinit which are toplevel options runtime options cannot appear after toplevel options in this case i wanted to change the dynamic space size i think the best solution might be to put all arguments before the arguments added by the portacle launcher but am up for other possibilites ,1
4732,3881791323.0,IssuesEvent,2016-04-13 07:02:26,lionheart/openradar-mirror,https://api.github.com/repos/lionheart/openradar-mirror,opened,20905192: Activating share extension while debugging it prevents it from showing up later,classification:ui/usability reproducible:always status:open,"#### Description
Summary:
If, while running an Xcode debug session with a share extension, you tap on More… and remove the share extension and re-add the share extension, it will not show up automatically next time the share UI is shown.
Steps to Reproduce:
1) Create a new Xcode iOS Project
2) Add a share extension target
3) Activate the share extension scheme
4) Build and run
5) Choose Photos.app
6) Select a photo
7) Tap share button
8) Tap More… button
9) Activate the new share extension scheme
10) Press done
11) Press cancel to cancel sharing
12) Press Share button again
Expected Results:
The new share extension is visible in the default list, and should be CHECKED in the “More…” list
Actual Results:
The new share extension is NOT visible in the default list, and is CHECKED in the “More…” list. Changing any of the switches and tapping Done will show the new share extension in the default list.
Notes:
Provide additional information, such as references to related problems, workarounds and relevant attachments.
-
Product Version: 8.3
Created: 2015-05-11T21:32:48.944220
Originated: 2015-05-11T14:32:00
Open Radar Link: http://www.openradar.me/20905192",True,"20905192: Activating share extension while debugging it prevents it from showing up later - #### Description
Summary:
If, while running an Xcode debug session with a share extension, you tap on More… and remove the share extension and re-add the share extension, it will not show up automatically next time the share UI is shown.
Steps to Reproduce:
1) Create a new Xcode iOS Project
2) Add a share extension target
3) Activate the share extension scheme
4) Build and run
5) Choose Photos.app
6) Select a photo
7) Tap share button
8) Tap More… button
9) Activate the new share extension scheme
10) Press done
11) Press cancel to cancel sharing
12) Press Share button again
Expected Results:
The new share extension is visible in the default list, and should be CHECKED in the “More…” list
Actual Results:
The new share extension is NOT visible in the default list, and is CHECKED in the “More…” list. Changing any of the switches and tapping Done will show the new share extension in the default list.
Notes:
Provide additional information, such as references to related problems, workarounds and relevant attachments.
-
Product Version: 8.3
Created: 2015-05-11T21:32:48.944220
Originated: 2015-05-11T14:32:00
Open Radar Link: http://www.openradar.me/20905192",0, activating share extension while debugging it prevents it from showing up later description summary if while running an xcode debug session with a share extension you tap on more… and remove the share extension and re add the share extension it will not show up automatically next time the share ui is shown steps to reproduce create a new xcode ios project add a share extension target activate the share extension scheme build and run choose photos app select a photo tap share button tap more… button activate the new share extension scheme press done press cancel to cancel sharing press share button again expected results the new share extension is visible in the default list and should be checked in the “more…” list actual results the new share extension is not visible in the default list and is checked in the “more…” list changing any of the switches and tapping done will show the new share extension in the default list notes provide additional information such as references to related problems workarounds and relevant attachments product version created originated open radar link ,0
142178,19074163373.0,IssuesEvent,2021-11-27 13:06:07,atlsecsrv-net-atlsecsrv-com/code.visualstudio,https://api.github.com/repos/atlsecsrv-net-atlsecsrv-com/code.visualstudio,closed,"WS-2019-0063 (High) detected in js-yaml-3.7.0.tgz, js-yaml-3.12.1.tgz",security vulnerability,"## WS-2019-0063 - High Severity Vulnerability
Vulnerable Libraries - js-yaml-3.7.0.tgz , js-yaml-3.12.1.tgz
js-yaml-3.7.0.tgz
YAML 1.2 parser and serializer
Library home page: https://registry.npmjs.org/js-yaml/-/js-yaml-3.7.0.tgz
Path to dependency file: /tmp/ws-scm/atlsecsrv-net-a-atlsecsrv.com/package.json
Path to vulnerable library: /tmp/ws-scm/atlsecsrv-net-a-atlsecsrv.com/node_modules/js-yaml
Dependency Hierarchy:
- gulp-cssnano-2.1.3.tgz (Root Library)
- cssnano-3.10.0.tgz
- postcss-svgo-2.1.6.tgz
- svgo-0.7.2.tgz
- :x: **js-yaml-3.7.0.tgz** (Vulnerable Library)
js-yaml-3.12.1.tgz
YAML 1.2 parser and serializer
Library home page: https://registry.npmjs.org/js-yaml/-/js-yaml-3.12.1.tgz
Path to dependency file: /tmp/ws-scm/atlsecsrv-net-a-atlsecsrv.com/package.json
Path to vulnerable library: /tmp/ws-scm/atlsecsrv-net-a-atlsecsrv.com/node_modules/js-yaml
Dependency Hierarchy:
- gulp-eslint-5.0.0.tgz (Root Library)
- eslint-5.13.0.tgz
- :x: **js-yaml-3.12.1.tgz** (Vulnerable Library)
Found in HEAD commit: a1479f17f72992a58ef6c45317028a2b0f60a97a
Found in base branch: master
Vulnerability Details
Js-yaml versions prior to 3.13.1 are vulnerable to Code Injection. The load() function may execute arbitrary code injected through a malicious YAML file.
Publish Date: 2019-04-05
URL: WS-2019-0063
CVSS 3 Score Details (8.1 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://www.npmjs.com/advisories/813
Release Date: 2019-04-05
Fix Resolution: js-yaml - 3.13.1
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"WS-2019-0063 (High) detected in js-yaml-3.7.0.tgz, js-yaml-3.12.1.tgz - ## WS-2019-0063 - High Severity Vulnerability
Vulnerable Libraries - js-yaml-3.7.0.tgz , js-yaml-3.12.1.tgz
js-yaml-3.7.0.tgz
YAML 1.2 parser and serializer
Library home page: https://registry.npmjs.org/js-yaml/-/js-yaml-3.7.0.tgz
Path to dependency file: /tmp/ws-scm/atlsecsrv-net-a-atlsecsrv.com/package.json
Path to vulnerable library: /tmp/ws-scm/atlsecsrv-net-a-atlsecsrv.com/node_modules/js-yaml
Dependency Hierarchy:
- gulp-cssnano-2.1.3.tgz (Root Library)
- cssnano-3.10.0.tgz
- postcss-svgo-2.1.6.tgz
- svgo-0.7.2.tgz
- :x: **js-yaml-3.7.0.tgz** (Vulnerable Library)
js-yaml-3.12.1.tgz
YAML 1.2 parser and serializer
Library home page: https://registry.npmjs.org/js-yaml/-/js-yaml-3.12.1.tgz
Path to dependency file: /tmp/ws-scm/atlsecsrv-net-a-atlsecsrv.com/package.json
Path to vulnerable library: /tmp/ws-scm/atlsecsrv-net-a-atlsecsrv.com/node_modules/js-yaml
Dependency Hierarchy:
- gulp-eslint-5.0.0.tgz (Root Library)
- eslint-5.13.0.tgz
- :x: **js-yaml-3.12.1.tgz** (Vulnerable Library)
Found in HEAD commit: a1479f17f72992a58ef6c45317028a2b0f60a97a
Found in base branch: master
Vulnerability Details
Js-yaml versions prior to 3.13.1 are vulnerable to Code Injection. The load() function may execute arbitrary code injected through a malicious YAML file.
Publish Date: 2019-04-05
URL: WS-2019-0063
CVSS 3 Score Details (8.1 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://www.npmjs.com/advisories/813
Release Date: 2019-04-05
Fix Resolution: js-yaml - 3.13.1
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,ws high detected in js yaml tgz js yaml tgz ws high severity vulnerability vulnerable libraries js yaml tgz js yaml tgz js yaml tgz yaml parser and serializer library home page a href path to dependency file tmp ws scm atlsecsrv net a atlsecsrv com package json path to vulnerable library tmp ws scm atlsecsrv net a atlsecsrv com node modules js yaml dependency hierarchy gulp cssnano tgz root library cssnano tgz postcss svgo tgz svgo tgz x js yaml tgz vulnerable library js yaml tgz yaml parser and serializer library home page a href path to dependency file tmp ws scm atlsecsrv net a atlsecsrv com package json path to vulnerable library tmp ws scm atlsecsrv net a atlsecsrv com node modules js yaml dependency hierarchy gulp eslint tgz root library eslint tgz x js yaml tgz vulnerable library found in head commit a href found in base branch master vulnerability details js yaml prior to are vulnerable to code injection the load function may execute arbitrary code injected through a malicious yaml file publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution js yaml step up your open source security game with whitesource ,0
477029,13754309964.0,IssuesEvent,2020-10-06 16:45:42,microsoft/PowerToys,https://api.github.com/repos/microsoft/PowerToys,closed,"[Functional: Settings>Remap key>Add button]: In 'Press a key on selected Keyboard' dialog on holding 'Enter' to continue, the PowerToys application is getting closed.",Accessibility [E+D] Issue-Bug Priority-0 Severity-Crash,"[Power Toys Settings-Keyboard Manager>Remap Key]
User Experience:
This will impact all users, as they will not be able to use this feature with the keyboard because the application is getting closed.
Test Environment:
""OS Version: 20221.1000
App Name: Power Toy Preview
App Version: v0.23.0
Screen Reader: Narrator""
Repro-steps:
""1. Open Power Toys Settings App.
2. Navigate to Keyboard Manager list item and activate it.
3. Navigate to Remap a key button present in right pane and activate it.
4. Remap Key window will open.
5. Navigate to 'Type' button in window and activate it. A 'Press a key in selected keyboard' pane will open.
6. Hold 'enter' key to continue and verify the issue.
""
Actual Result:
""In 'Press a key on selected Keyboard' dialog on holding 'Enter' to continue, the PowerToys application is getting closed.
Note:
Same issue is repro in 'Remap a Shortcut'.
Same issue is repro on holding Esc key.""
Expected Result:
In 'Press a key on selected keyboard' dialog on holding 'Enter' to continue, the PowerToys application should not be getting closed. It should continue and save the key.
[18_Remap k key_Functional_on holding enter button app is getting close.zip](https://github.com/microsoft/PowerToys/files/5329095/18_Remap.k.key_Functional_on.holding.enter.button.app.is.getting.close.zip)",1.0,"[Functional: Settings>Remap key>Add button]: In 'Press a key on selected Keyboard' dialog on holding 'Enter' to continue, the PowerToys application is getting closed. - [Power Toys Settings-Keyboard Manager>Remap Key]
User Experience:
This will impact all users, as they will not be able to use this feature with the keyboard because the application is getting closed.
Test Environment:
""OS Version: 20221.1000
App Name: Power Toy Preview
App Version: v0.23.0
Screen Reader: Narrator""
Repro-steps:
""1. Open Power Toys Settings App.
2. Navigate to Keyboard Manager list item and activate it.
3. Navigate to Remap a key button present in right pane and activate it.
4. Remap Key window will open.
5. Navigate to 'Type' button in window and activate it. A 'Press a key in selected keyboard' pane will open.
6. Hold 'enter' key to continue and verify the issue.
""
Actual Result:
""In 'Press a key on selected Keyboard' dialog on holding 'Enter' to continue, the PowerToys application is getting closed.
Note:
Same issue is repro in 'Remap a Shortcut'.
Same issue is repro on holding Esc key.""
Expected Result:
In 'Press a key on selected keyboard' dialog on holding 'Enter' to continue, the PowerToys application should not be getting closed. It should continue and save the key.
[18_Remap k key_Functional_on holding enter button app is getting close.zip](https://github.com/microsoft/PowerToys/files/5329095/18_Remap.k.key_Functional_on.holding.enter.button.app.is.getting.close.zip)",0, in press a key on selected keyboard dialog on holding enter to continue the powertoys application is getting closed user experience this will impact all the users as they will not able to use this feature with keyboard as application is getting closed test environment os version app name power toy preview app version screen reader narrator repro steps open power toys settings app navigate to keyboard manager list item and activate it navigate to remap a key button present in right pane and activate it remap key window will open navigate to type button in window and activate it a press a key in selected keyboard pane will open hold enter key to continue and verify the issue actual result in press a key on selected keyboard dialog on holding enter to continue the powertoys application is getting closed note same issue is repro in remap a shortcut same issue is repro on holding esc key expected result in press a key on selected keyboard dialog on holding enter to continue the powertoys application should not be getting closed it should continue and save the key ,0
579,7986098854.0,IssuesEvent,2018-07-19 00:00:27,rust-lang-nursery/stdsimd,https://api.github.com/repos/rust-lang-nursery/stdsimd,closed,Casting and width promotion,A-portable,"Currently, we define the following casts with `as_...` methods:
```
define_casts!(
(f32x2, f64x2, as_f64x2),
(f32x2, u32x2, as_u32x2),
(f32x2, i32x2, as_i32x2),
(u32x2, f32x2, as_f32x2),
(u32x2, i32x2, as_i32x2),
(i32x2, f32x2, as_f32x2),
(i32x2, u32x2, as_u32x2),
(u16x4, i16x4, as_i16x4),
(i16x4, u16x4, as_u16x4),
(u8x8, i8x8, as_i8x8),
(i8x8, u8x8, as_u8x8),
);
```
`simd_cast` can be used to cast between types of different widths, where each lane gets promoted. For example, for some ARM implementations I've needed:
```rust
define_casts!(
(i8x8, i16x8, as_i16x8),
(i16x4, i32x4, as_i32x4),
(i32x2, i64x2, as_i64x2),
(u8x8, u16x8, as_u16x8),
(u16x4, u32x4, as_u32x4),
(u32x2, u64x2, as_u64x2)
);
```
@BurntSushi how do you envision these casts? When should we add an `as_...` function? Should we use `simd_cast` in the library internally? Or should we do all the casts using calls to the intrinsics that perform them ?",True,"Casting and width promotion - Currently, we define the following casts with `as_...` methods:
```
define_casts!(
(f32x2, f64x2, as_f64x2),
(f32x2, u32x2, as_u32x2),
(f32x2, i32x2, as_i32x2),
(u32x2, f32x2, as_f32x2),
(u32x2, i32x2, as_i32x2),
(i32x2, f32x2, as_f32x2),
(i32x2, u32x2, as_u32x2),
(u16x4, i16x4, as_i16x4),
(i16x4, u16x4, as_u16x4),
(u8x8, i8x8, as_i8x8),
(i8x8, u8x8, as_u8x8),
);
```
`simd_cast` can be used to cast between types of different widths, where each lane gets promoted. For example, for some ARM implementations I've needed:
```rust
define_casts!(
(i8x8, i16x8, as_i16x8),
(i16x4, i32x4, as_i32x4),
(i32x2, i64x2, as_i64x2),
(u8x8, u16x8, as_u16x8),
(u16x4, u32x4, as_u32x4),
(u32x2, u64x2, as_u64x2)
);
```
@BurntSushi how do you envision these casts? When should we add an `as_...` function? Should we use `simd_cast` in the library internally? Or should we do all the casts using calls to the intrinsics that perform them ?",1,casting and width promotion currently we define the following casts with as methods define casts as as as as as as as as as as as simd cast can be used to cast between types of different widths where each lane gets promoted for example for some arm implementations i ve needed rust define casts as as as as as as burntsushi how do you envision these casts when should we add an as function should we use simd cast in the library internally or should we do all the casts using calls to the intrinsics that perform them ,1
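The stdsimd record above describes width-changing casts in which each lane is promoted individually. As a language-neutral illustration (C is used for the sketches added throughout this file), the semantics of a `u8x8` → `u16x8` promotion are just an element-wise widening loop; the function name below is illustrative and not part of stdsimd.
```c
#include <stddef.h>
#include <stdint.h>

/* Element-wise widening: every u8 lane is promoted to a u16 lane with
 * its value preserved -- the semantics behind a u8x8 -> u16x8 cast.
 * A vectorising compiler can turn this loop into a single widening
 * instruction (e.g. pmovzxbw on x86 with SSE4.1, uxtl on AArch64). */
static void widen_u8x8_to_u16x8(const uint8_t in[8], uint16_t out[8])
{
  for (size_t i = 0; i < 8; i++)
    out[i] = (uint16_t)in[i];
}
```
Whether such promotions get dedicated `as_...` methods or stay behind a generic `simd_cast` is the API question the issue raises.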
135,2534452389.0,IssuesEvent,2015-01-24 23:57:34,RobDixonIII/Bloom,https://api.github.com/repos/RobDixonIII/Bloom,closed,Stub Browser Modules,infrastructure,"In the Browser/Modules solution folder stub the following module projects:
* Bloom.Browser.Menu
* Bloom.Browser.Taxonomies
* Bloom.Browser.Library
* Bloom.Browser.Artist
* Bloom.Browser.Person
* Bloom.Browser.Album
* Bloom.Browser.Song
* Bloom.Browser.Playlist
This includes setting up the properties and assembly info, and NuGet references to Unity and Prism.
",1.0,"Stub Browser Modules - In the Browser/Modules solution folder stub the following module projects:
* Bloom.Browser.Menu
* Bloom.Browser.Taxonomies
* Bloom.Browser.Library
* Bloom.Browser.Artist
* Bloom.Browser.Person
* Bloom.Browser.Album
* Bloom.Browser.Song
* Bloom.Browser.Playlist
This includes setting up the properties and assembly info, and NuGet references to Unity and Prism.
",0,stub browser modules in the browser modules solution folder stub the following module projects bloom browser menu bloom browser taxonomies bloom browser library bloom browser artist bloom browser person bloom browser album bloom browser song bloom browser playlist this includes setting up the properties and assembly info and nuget references to unity and prism ,0
1855,27427642392.0,IssuesEvent,2023-03-01 21:42:02,Azure/azure-functions-host,https://api.github.com/repos/Azure/azure-functions-host,closed,Add logs for host startup cancellation,Supportability,"In a recent CRI (355435073), we saw a host startup cancellation log but there was no exception or other details/information about what might have caused this cancellation. In this specific case, it was most likely due to customer code startup issues which resulted in the host taking too long to start and therefore being cancelled.

Looking at the host codebase, we see that this is most likely not a host cancellation as we did not see this log `Initialization cancellation requested by runtime` at all in the Function Logs. Therefore, it looks like the exception is being swallowed as we're not logging or throwing at this point, which could potentially lead to a scenario where the host is running but not working (in a state that cannot be recovered from without a FunctionApp restart).
```csharp
try
{
await StartHostAsync(tokenSource.Token);
}
catch (OperationCanceledException)
{
if (cancellationToken.IsCancellationRequested)
{
_logger.ScriptHostServiceInitCanceledByRuntime(); // ""Initialization cancellation requested by runtime""
throw;
}
// If the exception was triggered by our loop cancellation token, just ignore as
// it doesn't indicate an issue.
}
```
Code: [StartAsync](https://github.com/Azure/azure-functions-host/blob/901d8b6d5859aad426c65053567e1f6fd5b98275/src/WebJobs.Script.WebHost/WebJobsScriptHostService.cs#L177-L191)
- [x] Add logging to the above code so we can see what exception might be happening
- [ ] Discuss if we should throw the exception here / see if we should be handling this differently
",True,"Add logs for host startup cancellation - In a recent CRI (355435073), we saw a host startup cancellation log but there was no exception or other details/information about what might have caused this cancellation. In this specific case, it was most likely due to to customer code startup issues which resulted in the host taking too long to start and therefore being cancelled.

Looking at the host codebase, we see that this is most likely not a host cancellation as we did not see this log `Initialization cancellation requested by runtime` at all in the Function Logs. Therefore, it looks like the exception is being swallowed as we're not logging or throwing at this point, which could potentially lead to a scenario where the host is running but not working (in a state that cannot be recovered from without a FunctionApp restart).
```csharp
try
{
await StartHostAsync(tokenSource.Token);
}
catch (OperationCanceledException)
{
if (cancellationToken.IsCancellationRequested)
{
_logger.ScriptHostServiceInitCanceledByRuntime(); // ""Initialization cancellation requested by runtime""
throw;
}
// If the exception was triggered by our loop cancellation token, just ignore as
// it doesn't indicate an issue.
}
```
Code: [StartAsync](https://github.com/Azure/azure-functions-host/blob/901d8b6d5859aad426c65053567e1f6fd5b98275/src/WebJobs.Script.WebHost/WebJobsScriptHostService.cs#L177-L191)
- [x] Add logging to the above code so we can see what exception might be happening
- [ ] Discuss if we should throw the exception here / see if we should be handling this differently
",1,add logs for host startup cancellation in a recent cri we saw a host startup cancellation log but there was no exception or other details information about what might have caused this cancellation in this specific case it was most likely due to to customer code startup issues which resulted in the host taking too long to start and therefore being cancelled looking at the host codebase we see that this is most likely not a host cancellation as we did not see this log initialization cancellation requested by runtime at all in the function logs therefore it looks like the exception is being swallowed as we re not logging or throwing at this point which could potentially lead to a scenario where the host is running but not working in a state that cannot be recovered from without a functionapp restart csharp try await starthostasync tokensource token catch operationcanceledexception if cancellationtoken iscancellationrequested logger scripthostserviceinitcanceledbyruntime initialization cancellation requested by runtime throw if the exception was triggered by our loop cancellation token just ignore as it doesn t indicate an issue code add logging to the above code so we can see what exception might be happening discuss if we should throw the exception here see if we should be handling this differently ,1
1234,16472034718.0,IssuesEvent,2021-05-23 15:59:02,recp/cglm,https://api.github.com/repos/recp/cglm,closed,"TODO: Make cglm work with Metal, Vulkan and DirectX",clip-space feature request feedback wanted help wanted important major-update portability,"**cglm** must support Metal and Vulkan (and maybe DirectX). To do this, **cglm** may provide alternative functions for alternative NDC coordinates. Or we could do that with preprocessor macros. But providing extra functions will provide the ability to switch between graphics APIs without rebuilding the code.
Resources:
1. https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf
2. https://metashapes.com/blog/opengl-metal-projection-matrix-problem/",True,"TODO: Make cglm work with Metal, Vulkan and DirectX - **cglm** must support Metal and Vulkan (and maybe DirectX). To do this, **cglm** may provide alternative functions for alternative NDC coordinates. Or we could do that with preprocessor macros. But providing extra functions will provide the ability to switch between graphics APIs without rebuilding the code.
Resources:
1. https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf
2. https://metashapes.com/blog/opengl-metal-projection-matrix-problem/",1,todo make cglm work with metal vulkan and directx cglm must support metal and vulkan and maybe directx to do this cglm may provide alternative functions for alternative ndc coordinates or we could do that with preprocessor macros but providing extra functions will provide ability to switch between graphics apis without rebuilding the code resources ,1
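The cglm record above is about supporting the clip-space conventions of Metal/Vulkan/DirectX, which map depth to [0, 1] (Vulkan also flips Y), unlike OpenGL's [-1, 1]. Below is a minimal, hedged sketch of what a right-handed, zero-to-one-depth perspective function could look like in cglm's column-major mat4 layout; the name `perspective_rh_zo` and the local `mat4` typedef are assumptions for illustration, not existing cglm API.
```c
#include <math.h>
#include <string.h>

/* Mirrors cglm's column-major mat4 layout so the sketch is self-contained. */
typedef float mat4[4][4];

/* Hypothetical perspective variant targeting a [0, 1] clip-space depth
 * range (Vulkan/Metal/DirectX) instead of OpenGL's [-1, 1].
 * Right-handed, column-major. */
static void perspective_rh_zo(float fovy, float aspect,
                              float nearZ, float farZ, mat4 dest)
{
  float f  = 1.0f / tanf(fovy * 0.5f);
  float fn = 1.0f / (nearZ - farZ);

  memset(dest, 0, sizeof(mat4));
  dest[0][0] = f / aspect;
  dest[1][1] = f;
  dest[2][2] = farZ * fn;            /* depth ends up in [0, 1] */
  dest[2][3] = -1.0f;
  dest[3][2] = nearZ * farZ * fn;
}
```
Shipping such variants as separate functions (rather than behind a preprocessor switch) is what would let an application target several graphics APIs without rebuilding, as the issue suggests.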
618213,19429394365.0,IssuesEvent,2021-12-21 10:09:28,bounswe/2021SpringGroup4,https://api.github.com/repos/bounswe/2021SpringGroup4,closed,Frontend: Add Map Functionality for Location Selection on Event Create Page ,Priority: High Status: Completed Type: Development Frontend,Google Maps will be added to the Event Creation Page. Users should be able to select location from map. A marker should be placed on the clicked place; latitude and longitude values and address should be sent to the Backend api events endpoint ,1.0,Frontend: Add Map Functionality for Location Selection on Event Create Page - Google Maps will be added to the Event Creation Page. Users should be able to select location from map. A marker should be placed on the clicked place; latitude and longitude values and address should be sent to the Backend api events endpoint ,0,frontend add map functionality for location selection on event create page google maps will be added to the event creation page users should be able to select location from map a marker should be placed on the clicked place latitude and longitude values and address should be sent to the backend api events endpoint ,0
115691,11885171856.0,IssuesEvent,2020-03-27 19:03:34,fabricio-garcia/pongon-a-virus,https://api.github.com/repos/fabricio-garcia/pongon-a-virus,opened,Scoring System,documentation enhancement,"- [ ] You must implement a **scoring system**, so that when the user completes a game they are given a score (number)
- [ ] You should [use this service API](https://www.notion.so/microverse/Leaderboard-API-service-24c0c3c116974ac49488d4eb0267ade3) to save the score associated to the game and the user name, and display a leaderboard (as a Phaser scene)",1.0,"Scoring System - - [ ] You must implement a **scoring system**, so that when the user completes a game they are given a score (number)
- [ ] You should [use this service API](https://www.notion.so/microverse/Leaderboard-API-service-24c0c3c116974ac49488d4eb0267ade3) to save the score associated to the game and the user name, and display a leaderboard (as a Phaser scene)",0,scoring system you must implement a scoring system so that when the user completes a game they are given a score number you should to save the score associated to the game and the user name and display a leaderboard as a phaser scene ,0
561,7860944578.0,IssuesEvent,2018-06-21 21:47:58,chapel-lang/chapel,https://api.github.com/repos/chapel-lang/chapel,opened,ZMQ uses incompatible cast in send,area: Modules type: Bug type: Portability,"When compiling a ZMQ program with CCE (which is more strict on its type checking than gcc),
we encounter a failure on line 771 of the ZMQ module
(`zmq_msg_init_data(msg, copy.c_str():c_void_ptr,
copy.length:size_t, c_ptrTo(free_helper),
c_nil)` with the `c_ptrTo(free_helper)` portion being ""incompatible"" with c_fn_ptr.",True,"ZMQ uses incompatible cast in send - When compiling a ZMQ program with CCE (which is more strict on its type checking than gcc),
we encounter a failure on line 771 of the ZMQ module
(`zmq_msg_init_data(msg, copy.c_str():c_void_ptr,
copy.length:size_t, c_ptrTo(free_helper),
c_nil)` with the `c_ptrTo(free_helper)` portion being ""incompatible"" with c_fn_ptr.",1,zmq uses incompatible cast in send when compiling a zmq program with cce which is more strict on its type checking than gcc we encounter a failure on line of the zmq module zmq msg init data msg copy c str c void ptr copy length size t c ptrto free helper c nil with the c ptrto free helper portion being incompatible with c fn ptr ,1
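For context on the Chapel ZMQ record above: `zmq_msg_init_data()` takes a `zmq_free_fn`, i.e. a plain C function pointer of type `void (*)(void *data, void *hint)`, which is the shape the Chapel-side `free_helper` ultimately has to be passed as. A hedged C sketch of the equivalent call follows; the `send_copy` helper is illustrative only.
```c
#include <zmq.h>
#include <stdlib.h>
#include <string.h>

/* zmq_msg_init_data() expects a zmq_free_fn, i.e. a plain C function
 * pointer: void (*)(void *data, void *hint). */
static void free_helper(void *data, void *hint)
{
  (void)hint;
  free(data);
}

/* Illustrative helper: hand a heap copy of `str` to libzmq for
 * zero-copy sending; libzmq calls free_helper once it is done. */
static int send_copy(void *socket, const char *str)
{
  size_t len = strlen(str);
  char *copy = malloc(len);
  zmq_msg_t msg;
  int rc;

  if (copy == NULL)
    return -1;
  memcpy(copy, str, len);
  if (zmq_msg_init_data(&msg, copy, len, free_helper, NULL) != 0) {
    free(copy);
    return -1;
  }
  rc = zmq_msg_send(&msg, socket, 0);
  if (rc < 0)
    zmq_msg_close(&msg);  /* releases the buffer via free_helper */
  return rc;
}
```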
753,10133121864.0,IssuesEvent,2019-08-02 01:46:00,microsoft/BotBuilder-Samples,https://api.github.com/repos/microsoft/BotBuilder-Samples,opened,[Teams] Provide a sample that shows how to do proactive messaging,4.6 P0 supportability teams,"## Sample information
1. Sample type: samples
2. Sample language:
[ ] dotnetcore
[ ] nodejs
[ ] python
3. Sample name:
## Describe the bug
The sample should also show how to handle rate limiting/throttling. Support for C#, JS, and Python is required
--
",True,"[Teams] Provide a sample that shows how to do proactive messaging - ## Sample information
1. Sample type: samples
2. Sample language:
[ ] dotnetcore
[ ] nodejs
[ ] python
3. Sample name:
## Describe the bug
The sample should also show how to handle rate limiting/throttling. Support for C#, JS, and Python is required
--
",1, provide a sample that shows how to do proactive messaging sample information sample type samples sample language dotnetcore nodejs python sample name describe the bug sample should also show how to rate limit throttle handling support for c js and python is required ,1
132253,18266267517.0,IssuesEvent,2021-10-04 08:49:24,artsking/linux-3.0.35_CVE-2020-15436_withPatch,https://api.github.com/repos/artsking/linux-3.0.35_CVE-2020-15436_withPatch,closed,CVE-2015-5366 (Medium) detected in linux-stable-rtv3.8.6 - autoclosed,security vulnerability,"## CVE-2015-5366 - Medium Severity Vulnerability
Vulnerable Library - linux-stable-rtv3.8.6
Julia Cartwright's fork of linux-stable-rt.git
Library home page: https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git
Found in HEAD commit: 594a70cb9871ddd73cf61197bb1a2a1b1777a7ae
Found in base branch: master
Vulnerable Source Files (1)
/net/ipv4/udp.c
Vulnerability Details
The (1) udp_recvmsg and (2) udpv6_recvmsg functions in the Linux kernel before 4.0.6 provide inappropriate -EAGAIN return values, which allows remote attackers to cause a denial of service (EPOLLET epoll application read outage) via an incorrect checksum in a UDP packet, a different vulnerability than CVE-2015-5364.
Publish Date: 2015-08-31
URL: CVE-2015-5366
CVSS 3 Score Details (5.5 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://www.linuxkernelcves.com/cves/CVE-2015-5366
Release Date: 2015-08-31
Fix Resolution: v4.1-rc7,v3.12.44,v3.14.45,v3.16.35,v3.18.17,v3.2.70
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2015-5366 (Medium) detected in linux-stable-rtv3.8.6 - autoclosed - ## CVE-2015-5366 - Medium Severity Vulnerability
Vulnerable Library - linux-stable-rtv3.8.6
Julia Cartwright's fork of linux-stable-rt.git
Library home page: https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git
Found in HEAD commit: 594a70cb9871ddd73cf61197bb1a2a1b1777a7ae
Found in base branch: master
Vulnerable Source Files (1)
/net/ipv4/udp.c
Vulnerability Details
The (1) udp_recvmsg and (2) udpv6_recvmsg functions in the Linux kernel before 4.0.6 provide inappropriate -EAGAIN return values, which allows remote attackers to cause a denial of service (EPOLLET epoll application read outage) via an incorrect checksum in a UDP packet, a different vulnerability than CVE-2015-5364.
Publish Date: 2015-08-31
URL: CVE-2015-5366
CVSS 3 Score Details (5.5 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://www.linuxkernelcves.com/cves/CVE-2015-5366
Release Date: 2015-08-31
Fix Resolution: v4.1-rc7,v3.12.44,v3.14.45,v3.16.35,v3.18.17,v3.2.70
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in linux stable autoclosed cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files net udp c vulnerability details the udp recvmsg and recvmsg functions in the linux kernel before provide inappropriate eagain return values which allows remote attackers to cause a denial of service epollet epoll application read outage via an incorrect checksum in a udp packet a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
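The kernel CVE above causes `udp_recvmsg` to return `-EAGAIN` spuriously. Why that stalls edge-triggered epoll applications is visible in the standard EPOLLET drain pattern sketched below: the socket is read until `EAGAIN`, so a premature `EAGAIN` ends the drain while datagrams are still queued and no new readiness edge will arrive for them. This is a minimal, hedged sketch with error handling trimmed.
```c
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Standard edge-triggered (EPOLLET) drain loop: epoll only reports a
 * *new* readiness edge, so the application must keep reading until
 * recv() fails with EAGAIN/EWOULDBLOCK.  If the kernel returns EAGAIN
 * while datagrams are still queued (the bug described above), this
 * loop stops early and those datagrams are never delivered -- the
 * "read outage" from the advisory. */
static void drain_udp_socket(int fd)
{
  char buf[2048];

  for (;;) {
    ssize_t n = recv(fd, buf, sizeof buf, 0);
    if (n < 0) {
      if (errno == EAGAIN || errno == EWOULDBLOCK)
        break;            /* assumed: queue drained */
      break;              /* real error: handle properly in real code */
    }
    /* process the n-byte datagram here */
    (void)n;
  }
}
```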
1232,16441628166.0,IssuesEvent,2021-05-20 14:56:30,AzureAD/microsoft-authentication-library-for-dotnet,https://api.github.com/repos/AzureAD/microsoft-authentication-library-for-dotnet,closed,[Feature Request] FindAccessToken logic should log the number of access tokens,Supportability,"4.26
We used to have a log message like: `Deserializing X Items from the Token Cache`
This was removed from the Log.Info logging to improve perf. However, it was a good investigation tool and it is worth adding it back.
https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/blob/02e4527df191e0af5eb5feacf8b89fe76d0ec8c7/src/client/Microsoft.Identity.Client/TokenCache.ITokenCacheInternal.cs#L319
Generally using IReadOnlyList instead of IEnumerable will avoid perf issues with calling Count.",True,"[Feature Request] FindAccessToken logic should log the number of access tokens - 4.26
We used to have a log message like: `Deserializing X Items from the Token Cache`
This was removed from the Log.Info logging to improve perf. However, it was a good investigation tool and it is worth adding it back.
https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/blob/02e4527df191e0af5eb5feacf8b89fe76d0ec8c7/src/client/Microsoft.Identity.Client/TokenCache.ITokenCacheInternal.cs#L319
Generally using IReadOnlyList instead of IEnumerable will avoid perf issues with calling Count.",1, findaccesstoken logic should log the number of access tokens we used to have a log message like deserializing x items from the token cache this was removed from the log info logging to improve perf however it was a good investigation tool and it is worth adding it back generally using ireadonlylist instead of ienumerable will avoid perf issues with calling count ,1
1624,23365943991.0,IssuesEvent,2022-08-10 15:22:03,MicrosoftDocs/sql-docs,https://api.github.com/repos/MicrosoftDocs/sql-docs,closed,It does not work on Azure SQL Server,sql/prod supportability/tech doc-bug Pri1,"[Enter feedback here]
It does not work on Azure SQL Server because it does not support the USE command.
---
#### Document details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: f6cb23e7-dfcf-8e7e-9042-aa4ce18fdfe3
* Version Independent ID: 71341cc5-767d-74d1-d0d9-81c1c87540ad
* Content: [Delete a Database - SQL Server](https://docs.microsoft.com/vi-vn/sql/relational-databases/databases/delete-a-database?view=azuresqldb-current)
* Content Source: [docs/relational-databases/databases/delete-a-database.md](https://github.com/MicrosoftDocs/sql-docs/blob/live/docs/relational-databases/databases/delete-a-database.md)
* Product: **sql**
* Technology: **supportability**
* GitHub Login: @WilliamDAssafMSFT
* Microsoft Alias: **wiassaf**",True,"It does not work on Azure SQL Server - [Nhập phản hồi vào đây]
It does not work on Azure SQL Server because it does not support the USE command.
---
#### Document details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: f6cb23e7-dfcf-8e7e-9042-aa4ce18fdfe3
* Version Independent ID: 71341cc5-767d-74d1-d0d9-81c1c87540ad
* Content: [Delete a Database - SQL Server](https://docs.microsoft.com/vi-vn/sql/relational-databases/databases/delete-a-database?view=azuresqldb-current)
* Content Source: [docs/relational-databases/databases/delete-a-database.md](https://github.com/MicrosoftDocs/sql-docs/blob/live/docs/relational-databases/databases/delete-a-database.md)
* Product: **sql**
* Technology: **supportability**
* GitHub Login: @WilliamDAssafMSFT
* Microsoft Alias: **wiassaf**",1,it does not work on azure sql server it does not work on azure sql server due to they don t support use command chi tiết tài liệu ⚠ không chỉnh sửa phần này điều này là bắt buộc cho docs microsoft com ➟ vấn đề khi liên kết github id dfcf version independent id content content source product sql technology supportability github login williamdassafmsft microsoft alias wiassaf ,1
726,9747218731.0,IssuesEvent,2019-06-03 13:58:38,kyma-project/kyma,https://api.github.com/repos/kyma-project/kyma,closed,Fix Prometheus and function tests for backup-restore,quality/observability quality/robustness quality/supportability,"**AC**
- [x] Prometheus tests are green for back-up restore tests
- [x] Function tests are green for back-up restore tests",True,"Fix Prometheus and function tests for backup-restore - **AC**
- [x] Prometheus tests are green for back-up restore tests
- [x] Function tests are green for back-up restore tests",1,fix prometheus and function tests for backup restore ac prometheus tests are green for back up restore tests function tests are green for back up restore tests,1
694900,23835617462.0,IssuesEvent,2022-09-06 05:25:28,HughCraig/TLCMap,https://api.github.com/repos/HughCraig/TLCMap,opened,Shape search errors,bug priority 1 Scope 2,"Drawing a square works eg:
15765 results for this search:
https://test.tlcmap.org/ghap/search?searchausgaz=on&searchpublicdatasets=on&_token=BK95dRa0nOKBgUFxPbm15GJBuftFFvRWydhQBYiz&bbox=150.733362%2C-34.522218%2C151.96391%2C-33.648522
Draw a circle and search gives SQL error:
SQLSTATE[42883]: Undefined function: 7 ERROR: function st_point(numeric, numeric) does not exist LINE 1
Draw a polygon gives SQL error:
ERROR: function st_geomfromtext(unknown) does not exist LINE 1: ...""dataset"".""id"" and ""public"" = $1) and ST_CONTAINS(ST_GEOMFRO... ^ HINT: No function matches the given name and argument types.
Under advanced search 'Search within a KML Polygon' gives error
SQLSTATE[42883]: Undefined function: 7 ERROR: function st_geomfromtext(unknown) does not exist LINE 1: ...ate from ""gazetteer"".""register"" where ST_CONTAINS(ST_GEOMFRO... ^ HINT: No function matches the given name and argument types.
",1.0,"Shape search errors - Drawing a square works eg:
15765 results for this search:
https://test.tlcmap.org/ghap/search?searchausgaz=on&searchpublicdatasets=on&_token=BK95dRa0nOKBgUFxPbm15GJBuftFFvRWydhQBYiz&bbox=150.733362%2C-34.522218%2C151.96391%2C-33.648522
Draw a circle and search gives SQL error:
SQLSTATE[42883]: Undefined function: 7 ERROR: function st_point(numeric, numeric) does not exist LINE 1
Draw a polygon gives SQL error:
ERROR: function st_geomfromtext(unknown) does not exist LINE 1: ...""dataset"".""id"" and ""public"" = $1) and ST_CONTAINS(ST_GEOMFRO... ^ HINT: No function matches the given name and argument types.
Under advanced search 'Search within a KML Polygon' gives error
SQLSTATE[42883]: Undefined function: 7 ERROR: function st_geomfromtext(unknown) does not exist LINE 1: ...ate from ""gazetteer"".""register"" where ST_CONTAINS(ST_GEOMFRO... ^ HINT: No function matches the given name and argument types.
",0,shape search errors drawing a square works eg results for this search draw a circle and search gives sql error sqlstate undefined function error function st point numeric numeric does not exist line draw a polygon gives sql error error function st geomfromtext unknown does not exist line dataset id and public and st contains st geomfro hint no function matches the given name and argument types under advanced search search within a kml polygon gives error sqlstate undefined function error function st geomfromtext unknown does not exist line ate from gazetteer register where st contains st geomfro hint no function matches the given name and argument types ,0
28618,5311288930.0,IssuesEvent,2017-02-13 02:44:21,junichi11/netbeans-gitignore-io-plugin,https://api.github.com/repos/junichi11/netbeans-gitignore-io-plugin,closed,Plugin nbm does not contain plugin description,defect,"The nbm distributable does not contain any description metadata. This means when viewing the plugin on NetBeans update centre it is unclear what the plugin does. The provided link does go to the github site which does provide a reasonable description from the README.md
Please add a description to the ""OpenIDE-Module-Short-Description"", perhaps just the contents of the README.
",1.0,"Plugin nbm does not contain plugin description - The nbm distributable does not contain any description metadata. This means when viewing the plugin on NetBeans update centre it is unclear what the plugin does. The provided link does go to the github site which does provide a reasonable description from the README.md
Please add a description to the ""OpenIDE-Module-Short-Description"", perhaps just the contents of the README.
",0,plugin nbm does not contain plugin description the nbm distributable does not contain any description metadata this means when viewing the plugin on netbeans update centre it is unclear what the plugin does the provided link does go to the github site which does provide a reasonable description from the readme md please add a description to the openide module short description perhaps just the contents of the readme ,0
193,4018861716.0,IssuesEvent,2016-05-16 12:47:02,svaarala/duktape,https://api.github.com/repos/svaarala/duktape,opened,Make DUK_VERSION visible to duk_config.h,portability,"Currently `duk_config.h` is included in `duktape.h` before the API constants, so that `DUK_VERSION` is not visible in `duk_config.h`. It would be useful to expose `DUK_VERSION` to `duk_config.h` to allow the configuration file to react to the Duktape version; while it's not necessarily easy, this would allow a configuration file to handle multiple Duktape versions.",True,"Make DUK_VERSION visible to duk_config.h - Currently `duk_config.h` is included in `duktape.h` before the API constants, so that `DUK_VERSION` is not visible in `duk_config.h`. It would be useful to expose `DUK_VERSION` to `duk_config.h` to allow the configuration file to react to the Duktape version; while it's not necessarily easy, this would allow a configuration file to handle multiple Duktape versions.",1,make duk version visible to duk config h currently duk config h is included in duktape h before the api constants so that duk version is not visible in duk config h it would be useful to expose duk version to duk config h to allow the configuration file to react to the duktape version while it s not necessarily easy this would allow a configuration file to handle multiple duktape versions ,1
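To make the duktape request above concrete: if `DUK_VERSION` were visible before `duk_config.h` is processed, a single configuration file could branch on the Duktape version. A hedged sketch of such a block follows; `MYPROJ_USE_NEW_API` is a made-up project-level define, not a Duktape option.
```c
/* Excerpt from a hypothetical custom duk_config.h.  This only works if
 * DUK_VERSION (major*10000 + minor*100 + patch, e.g. 20000 for 2.0.0)
 * is already defined when this file is processed -- which is exactly
 * what the issue asks for. */
#if !defined(DUK_VERSION)
#error "DUK_VERSION is not visible to duk_config.h"
#elif (DUK_VERSION >= 20000)
#define MYPROJ_USE_NEW_API 1   /* Duktape 2.x or later */
#else
#define MYPROJ_USE_NEW_API 0   /* Duktape 1.x fallback */
#endif
```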
615147,19215708049.0,IssuesEvent,2021-12-07 09:15:37,geosolutions-it/MapStore2,https://api.github.com/repos/geosolutions-it/MapStore2,closed,Google Street View Plugin,Priority: High Accepted New Feature,"## Description
Provide a Google Street View plugin that makes it possible to browse data from Google Street View (using an API key).
## Acceptance criteria
- [ ] The plugin should be configurable with the API key from plugin or localConfig.json
- [ ] The plugin should not be delivered in the standard MapStore (because api-key is required), but can be enabled in contexts
- [ ] Configurations must be documented
## Other useful information
",1.0,"Google Street View Plugin - ## Description
Provide a Google Street View plugin that makes it possible to browse data from Google Street View (using an API key).
## Acceptance criteria
- [ ] The plugin should be configurable with the API key from plugin or localConfig.json
- [ ] The plugin should not be delivered in the standard MapStore (because api-key is required), but can be enabled in contexts
- [ ] Configurations must be documented
## Other useful information
",0,google street view plugin description provide a google street view plugin that provides the possibility to browse data from google street view using an api key acceptance criteria the plugin should be configurable with the api key from plugin or localconfig json the plugin should not be delivered in the standard mapstore because api key is required but can be enabled in contexts configurations must be documented other useful information ,0
521,7332722212.0,IssuesEvent,2018-03-05 17:06:26,jedisct1/libsodium,https://api.github.com/repos/jedisct1/libsodium,closed,Crash on Android x86 in SIMD instruction,portability,"Hi,
We're having some issues using libsodium 1.0.16 on Android: we're cross-compiling it for multiple architectures, but we're having issues with x86.
We tried to hack the configure/Makefile to not use SIMD (and using --disable-asm), but couldn't quite get it to work.
Note that everything works fine for us on x86-64.
Our target x86 CPU (which is the Android emulator) doesn't support much in the way of SIMD instructions, so it trips and dies on the first one it sees.
Here's a backtrace and disassembly of the crash site:
```
* thread #31, name = 'io.myapp', stop reason = signal SIGSEGV: invalid address (fault address: 0x0)
* frame #0: 0x87bed4a7 libmyapp.so`SHA512_Transform [inlined] be64dec_vect(len=128) at hash_sha512_cp.c:0
frame #1: 0x87bed46a libmyapp.so`SHA512_Transform(state=0x88642800, block=""\x02M�;'��F�)\x90\xbaة\bJRE�b\x9c\x80\x94#G\x92�l\x1a\x89\xad܀"", W=0x88642510, S=) at hash_sha512_cp.c:119
frame #2: 0x87bef0bb libmyapp.so`crypto_hash_sha512_final [inlined] SHA512_Pad(state=) at hash_sha512_cp.c:191
frame #3: 0x87beee54 libmyapp.so`crypto_hash_sha512_final(state=0x88642800, out=""H2d\x88\xb6㪇t-d\x88"") at hash_sha512_cp.c:263
frame #4: 0x87bef34b libmyapp.so`crypto_hash_sha512(out=, in=, inlen=) at hash_sha512_cp.c:279
frame #5: 0x87bf0207 libmyapp.so`crypto_sign_ed25519_keypair [inlined] crypto_sign_ed25519_seed_keypair(pk=""����H2d\x88H2d\x88H2d\x88 -d\x88�6\xaa\x87H2d\x88H2d\x88H2d\x88\xb6㪇t-d\x88"", sk=""H2d\x88\xb6㪇t-d\x88"", seed=) at keypair.c:21
frame #6: 0x87bf01fc libmyapp.so`crypto_sign_ed25519_keypair(pk=, sk=) at keypair.c:43
frame #7: 0x87befcf0 libmyapp.so`crypto_sign_keypair(pk=, sk=) at crypto_sign.c:56
```
```
0x87bed494 <+68>: cmpl $0x10, %ebx
0x87bed497 <+71>: jne 0x87bed480 ; <+48> [inlined] load64_be + 2 at hash_sha512_cp.c:56
0x87bed499 <+73>: jmp 0x87bed58d ; <+317> [inlined] memcpy(void*, void const* pass_object_size0, unsigned int) at hash_sha512_cp.c:120
0x87bed49e <+78>: xorl %esi, %esi
0x87bed4a0 <+80>: movaps -0x169994(%eax), %xmm0
-> 0x87bed4a7 <+87>: movaps %xmm0, 0x50(%esp)
0x87bed4ac <+92>: movdqa -0x169984(%eax), %xmm0
0x87bed4b4 <+100>: movdqa %xmm0, 0x60(%esp)
0x87bed4ba <+106>: movdqa -0x169974(%eax), %xmm2
0x87bed4c2 <+114>: movdqa -0x169964(%eax), %xmm3
```
Have we missed something obvious regarding the configure options?
Thanks!",True,"Crash on Android x86 in SIMD instruction - Hi,
We're having some issues using libsodium 1.0.16 on Android: we're cross-compiling it for multiple architectures, but we're having issues with x86.
We tried to hack the configure/Makefile to not use SIMD (and using --disable-asm), but couldn't quite get it to work.
Note that everything works fine for us on x86-64.
Our target x86 CPU (which is the Android emulator) doesn't support much in the way of SIMD instructions, so it trips and dies on the first one it sees.
Here's a backtrace and disassembly of the crash site:
```
* thread #31, name = 'io.myapp', stop reason = signal SIGSEGV: invalid address (fault address: 0x0)
* frame #0: 0x87bed4a7 libmyapp.so`SHA512_Transform [inlined] be64dec_vect(len=128) at hash_sha512_cp.c:0
frame #1: 0x87bed46a libmyapp.so`SHA512_Transform(state=0x88642800, block=""\x02M�;'��F�)\x90\xbaة\bJRE�b\x9c\x80\x94#G\x92�l\x1a\x89\xad܀"", W=0x88642510, S=) at hash_sha512_cp.c:119
frame #2: 0x87bef0bb libmyapp.so`crypto_hash_sha512_final [inlined] SHA512_Pad(state=) at hash_sha512_cp.c:191
frame #3: 0x87beee54 libmyapp.so`crypto_hash_sha512_final(state=0x88642800, out=""H2d\x88\xb6㪇t-d\x88"") at hash_sha512_cp.c:263
frame #4: 0x87bef34b libmyapp.so`crypto_hash_sha512(out=, in=, inlen=) at hash_sha512_cp.c:279
frame #5: 0x87bf0207 libmyapp.so`crypto_sign_ed25519_keypair [inlined] crypto_sign_ed25519_seed_keypair(pk=""����H2d\x88H2d\x88H2d\x88 -d\x88�6\xaa\x87H2d\x88H2d\x88H2d\x88\xb6㪇t-d\x88"", sk=""H2d\x88\xb6㪇t-d\x88"", seed=) at keypair.c:21
frame #6: 0x87bf01fc libmyapp.so`crypto_sign_ed25519_keypair(pk=, sk=) at keypair.c:43
frame #7: 0x87befcf0 libmyapp.so`crypto_sign_keypair(pk=, sk=) at crypto_sign.c:56
```
```
0x87bed494 <+68>: cmpl $0x10, %ebx
0x87bed497 <+71>: jne 0x87bed480 ; <+48> [inlined] load64_be + 2 at hash_sha512_cp.c:56
0x87bed499 <+73>: jmp 0x87bed58d ; <+317> [inlined] memcpy(void*, void const* pass_object_size0, unsigned int) at hash_sha512_cp.c:120
0x87bed49e <+78>: xorl %esi, %esi
0x87bed4a0 <+80>: movaps -0x169994(%eax), %xmm0
-> 0x87bed4a7 <+87>: movaps %xmm0, 0x50(%esp)
0x87bed4ac <+92>: movdqa -0x169984(%eax), %xmm0
0x87bed4b4 <+100>: movdqa %xmm0, 0x60(%esp)
0x87bed4ba <+106>: movdqa -0x169974(%eax), %xmm2
0x87bed4c2 <+114>: movdqa -0x169964(%eax), %xmm3
```
Have we missed something obvious regarding the configure options?
Thanks!",1,crash on android in simd instruction hi we re having some issues using libsodium on android we re cross compiling it for multiple architectures but we re having issues with we tried to hack the configure makefile to not use simd and using disable asm but couldn t quite get it to work note that everything works fine for us on our target cpu which is the android emulator doesn t support much in the way of simd instructions so it trips and dies on the first one it sees here s a backtrace and disassembly of the crash site thread name io myapp stop reason signal sigsegv invalid address fault address frame libmyapp so transform vect len at hash cp c frame libmyapp so transform state block � ��f� xbaة bjre�b g �l xad܀ w s at hash cp c frame libmyapp so crypto hash final pad state at hash cp c frame libmyapp so crypto hash final state out d at hash cp c frame libmyapp so crypto hash out in inlen at hash cp c frame libmyapp so crypto sign keypair crypto sign seed keypair pk ���� d � xaa d sk d seed at keypair c frame libmyapp so crypto sign keypair pk sk at keypair c frame libmyapp so crypto sign keypair pk sk at crypto sign c cmpl ebx jne be at hash cp c jmp memcpy void void const pass object unsigned int at hash cp c xorl esi esi movaps eax movaps esp movdqa eax movdqa esp movdqa eax movdqa eax have we missed something obvious regarding the configure options thanks ,1
299182,9205198190.0,IssuesEvent,2019-03-08 09:54:16,qissue-bot/QGIS,https://api.github.com/repos/qissue-bot/QGIS,closed,SPIT: implement rollback when import of shapes into PGIS get canceled,Component: Easy fix? Component: Pull Request or Patch supplied Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Feature request,"---
Author Name: **stephan-holl-intevation-de -** (stephan-holl-intevation-de -)
Original Redmine Issue: 593, https://issues.qgis.org/issues/593
Original Assignee: nobody -
---
SPIT: once the import of a shapefile has started, the 'cancel' button
no longer works. It would be nice to wrap the import in a
transaction block so that 'cancel' could easily roll it back.
",1.0,"SPIT: implement rollback when import of shapes into PGIS get canceled - ---
Author Name: **stephan-holl-intevation-de -** (stephan-holl-intevation-de -)
Original Redmine Issue: 593, https://issues.qgis.org/issues/593
Original Assignee: nobody -
---
SPIT: once the import of a shapefile has started, the 'cancel' button
no longer works. It would be nice to wrap the import in a
transaction block so that 'cancel' could easily roll it back.
",0,spit implement rollback when import of shapes into pgis get canceled author name stephan holl intevation de stephan holl intevation de original redmine issue original assignee nobody spit once starting the import of a shapefile the cancel button does not work anymore it would be nice to cover the import into a transaction block that cancel could be easily rolled back ,0
18013,2615161114.0,IssuesEvent,2015-03-01 06:39:50,chrsmith/html5rocks,https://api.github.com/repos/chrsmith/html5rocks,closed,make it work on tablets,auto-migrated Priority-P3 Slides Type-Bug,"```
Please describe the issue:
Hi I'm testing slides-html5rocks.com on the samsung galaxy tab 10.1 (google
io).
Keyboard support is hard..going to pages is difficult.
Please provide any additional information below.
Also tested on the iPad2. Some crashes. But I guess that it is meant for Chrome.
```
Original issue reported on code.google.com by `rjankie` on 23 Jun 2011 at 5:20",1.0,"make it work on tablets - ```
Please describe the issue:
Hi I'm testing slides-html5rocks.com on the samsung galaxy tab 10.1 (google
io).
Keyboard support is hard..going to pages is difficult.
Please provide any additional information below.
Also tested on the iPad2. Some crashes. But I guess that it is meant for Chrome.
```
Original issue reported on code.google.com by `rjankie` on 23 Jun 2011 at 5:20",0,make it work on tablets please describe the issue hi i m testing slides com on the samsung galaxy tab google io keyboard support is hard going to pages is difficult please provide any additional information below also tested on the some crashes but i guess that it is meant for chrome original issue reported on code google com by rjankie on jun at ,0
1295,17408910259.0,IssuesEvent,2021-08-03 09:42:55,elastic/cloud-on-k8s,https://api.github.com/repos/elastic/cloud-on-k8s,closed,eckdump.sh also dumps secret content through metadata,>bug supportability,"We inadvertently dump the contents of (all) secrets in the `eckdump.sh` script. While the intent behind the `get_metadata` function was probably to avoid exactly that, it is not effective. This is because the `last-applied-configuration` metadata also contains the secret data.
We should probably be a bit smarter about that by either:
1. not dump secrets at all
2. be more selective with secrets: e.g. what we really want is the keys but not the values as this gives us clues whether the expected data is present, also avoid dumping the `last-applied-configuration` annotation.
",True,"eckdump.sh also dumps secret content through metadata - We inadvertently dump the contents of (all) secrets in the the `eckdump.sh` script. While the intent behind the `get_metadata` function was probably to avoid exactly that it is not effective. This is because the `last-applied-configuration` metadata also contains the secret data.
We should probably be a bit smarter about that by either:
1. not dump secrets at all
2. be more selective with secrets: e.g. what we really want is the keys but not the values as this gives us clues whether the expected data is present, also avoid dumping the `last-applied-configuration` annotation.
",1,eckdump sh also dumps secret content through metadata we inadvertently dump the contents of all secrets in the the eckdump sh script while the intent behind the get metadata function was probably to avoid exactly that it is not effective this is because the last applied configuration metadata also contains the secret data we should probably be a bit smarter about that by either not dump secrets at all be more selective with secrets e g what we really want is the keys but not the values as this gives us clues whether the expected data is present also avoid dumping the last applied configuration annotation ,1
205507,15978592457.0,IssuesEvent,2021-04-17 10:35:28,bounswe/2021SpringGroup3,https://api.github.com/repos/bounswe/2021SpringGroup3,opened,Scenario: Search & Filter,Priority: High Status: Available Type: Documentation,Create a scenario of a user searching for a post using filtering mechanisms.,1.0,Scenario: Search & Filter - Create a scenario of a user searching for a post using filtering mechanisms.,0,scenario search filter create a scenario of a user searching for a post using filtering mechanisms ,0
48900,13425013948.0,IssuesEvent,2020-09-06 08:19:54,searchboy-sudo/headless-wp-nuxt,https://api.github.com/repos/searchboy-sudo/headless-wp-nuxt,opened,"CVE-2019-6284 (Medium) detected in node-sass-v4.13.1, node-sass-4.13.1.tgz",security vulnerability,"## CVE-2019-6284 - Medium Severity Vulnerability
Vulnerable Libraries - node-sass-4.13.1.tgz
node-sass-4.13.1.tgz
Wrapper around libsass
Library home page: https://registry.npmjs.org/node-sass/-/node-sass-4.13.1.tgz
Path to dependency file: /tmp/ws-scm/headless-wp-nuxt/package.json
Path to vulnerable library: /headless-wp-nuxt/node_modules/node-sass/package.json
Dependency Hierarchy:
- :x: **node-sass-4.13.1.tgz** (Vulnerable Library)
Found in HEAD commit: 748e38948b04db4c74d2e3dae8a217d0ecbc395c
Vulnerability Details
In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::alternatives in prelexer.hpp.
Publish Date: 2019-01-14
URL: CVE-2019-6284
CVSS 3 Score Details (6.5 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284
Release Date: 2019-08-06
Fix Resolution: LibSass - 3.6.0
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2019-6284 (Medium) detected in node-sass-v4.13.1, node-sass-4.13.1.tgz - ## CVE-2019-6284 - Medium Severity Vulnerability
Vulnerable Libraries - node-sass-4.13.1.tgz
node-sass-4.13.1.tgz
Wrapper around libsass
Library home page: https://registry.npmjs.org/node-sass/-/node-sass-4.13.1.tgz
Path to dependency file: /tmp/ws-scm/headless-wp-nuxt/package.json
Path to vulnerable library: /headless-wp-nuxt/node_modules/node-sass/package.json
Dependency Hierarchy:
- :x: **node-sass-4.13.1.tgz** (Vulnerable Library)
Found in HEAD commit: 748e38948b04db4c74d2e3dae8a217d0ecbc395c
Vulnerability Details
In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::alternatives in prelexer.hpp.
Publish Date: 2019-01-14
URL: CVE-2019-6284
CVSS 3 Score Details (6.5 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284
Release Date: 2019-08-06
Fix Resolution: LibSass - 3.6.0
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in node sass node sass tgz cve medium severity vulnerability vulnerable libraries node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file tmp ws scm headless wp nuxt package json path to vulnerable library headless wp nuxt node modules node sass package json dependency hierarchy x node sass tgz vulnerable library found in head commit a href vulnerability details in libsass a heap based buffer over read exists in sass prelexer alternatives in prelexer hpp publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource ,0
1533,22157266995.0,IssuesEvent,2022-06-04 01:49:28,apache/beam,https://api.github.com/repos/apache/beam,opened,Python SDK: Collect metrics in non-cython environment,new feature P3 runner-flink sdk-py-harness portability-flink portable-metrics-bugs,"With the portable Flink runner, the metric is reported as 0, while the count metric works as expected.
[https://lists.apache.org/thread.html/25eec8104bda6e4c71cc6c5e9864c335833c3ae2afe225d372479f30@%3Cdev.beam.apache.org%3E](https://lists.apache.org/thread.html/25eec8104bda6e4c71cc6c5e9864c335833c3ae2afe225d372479f30@%3Cdev.beam.apache.org%3E)
Edit: metrics are collected properly when using cython, but not without cython. This is because state sampling has yet to be implemented in a way that does not depend on cython [1].
[1] [https://github.com/apache/beam/blob/master/sdks/python/apache_beam/runners/worker/statesampler_slow.py#L62](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/runners/worker/statesampler_slow.py#L62)
Imported from Jira [BEAM-7058](https://issues.apache.org/jira/browse/BEAM-7058). Original Jira may contain additional context.
Reported by: thw.",True,"Python SDK: Collect metrics in non-cython environment - With the portable Flink runner, the metric is reported as 0, while the count metric works as expected.
[https://lists.apache.org/thread.html/25eec8104bda6e4c71cc6c5e9864c335833c3ae2afe225d372479f30@%3Cdev.beam.apache.org%3E](https://lists.apache.org/thread.html/25eec8104bda6e4c71cc6c5e9864c335833c3ae2afe225d372479f30@%3Cdev.beam.apache.org%3E)
Edit: metrics are collected properly when using cython, but not without cython. This is because state sampling has yet to be implemented in a way that does not depend on cython [1].
[1] [https://github.com/apache/beam/blob/master/sdks/python/apache_beam/runners/worker/statesampler_slow.py#L62](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/runners/worker/statesampler_slow.py#L62)
Imported from Jira [BEAM-7058](https://issues.apache.org/jira/browse/BEAM-7058). Original Jira may contain additional context.
Reported by: thw.",1,python sdk collect metrics in non cython environment with the portable flink runner the metric is reported as while the count metric works as expected edit metrics are collected properly when using cython but not without cython this is because state sampling has yet to be implemented in a way that does not depend on cython imported from jira original jira may contain additional context reported by thw ,1
211873,23851162663.0,IssuesEvent,2022-09-06 18:04:17,xmidt-org/talaria,https://api.github.com/repos/xmidt-org/talaria,closed,CVE-2022-29526 (Medium) detected in github.com/hashicorp/go-sockaddr-v1.0.2 - autoclosed,security vulnerability,"## CVE-2022-29526 - Medium Severity Vulnerability
Vulnerable Library - github.com/hashicorp/go-sockaddr-v1.0.2
IP Address/UNIX Socket convenience functions for Go
Dependency Hierarchy:
- github.com/xmidt-org/webpa-common/v2-v2.0.7-dev.1 (Root Library)
- github.com/hashicorp/consul/api-v1.13.1
- github.com/hashicorp/serf-v0.9.8
- github.com/hashicorp/memberlist-v0.3.0
- :x: **github.com/hashicorp/go-sockaddr-v1.0.2** (Vulnerable Library)
Found in HEAD commit: 5585120e6948118205d4650798b0373c28b1ec78
Found in base branch: main
Vulnerability Details
Go before 1.17.10 and 1.18.x before 1.18.2 has Incorrect Privilege Assignment. When called with a non-zero flags parameter, the Faccessat function could incorrectly report that a file is accessible.
Publish Date: 2022-06-23
URL: CVE-2022-29526
CVSS 3 Score Details (5.3 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://security-tracker.debian.org/tracker/CVE-2022-29526
Release Date: 2022-06-23
Fix Resolution: go1.17.10,go1.18.2,go1.19
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-29526 (Medium) detected in github.com/hashicorp/go-sockaddr-v1.0.2 - autoclosed - ## CVE-2022-29526 - Medium Severity Vulnerability
Vulnerable Library - github.com/hashicorp/go-sockaddr-v1.0.2
IP Address/UNIX Socket convenience functions for Go
Dependency Hierarchy:
- github.com/xmidt-org/webpa-common/v2-v2.0.7-dev.1 (Root Library)
- github.com/hashicorp/consul/api-v1.13.1
- github.com/hashicorp/serf-v0.9.8
- github.com/hashicorp/memberlist-v0.3.0
- :x: **github.com/hashicorp/go-sockaddr-v1.0.2** (Vulnerable Library)
Found in HEAD commit: 5585120e6948118205d4650798b0373c28b1ec78
Found in base branch: main
Vulnerability Details
Go before 1.17.10 and 1.18.x before 1.18.2 has Incorrect Privilege Assignment. When called with a non-zero flags parameter, the Faccessat function could incorrectly report that a file is accessible.
Publish Date: 2022-06-23
URL: CVE-2022-29526
CVSS 3 Score Details (5.3 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://security-tracker.debian.org/tracker/CVE-2022-29526
Release Date: 2022-06-23
Fix Resolution: go1.17.10,go1.18.2,go1.19
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in github com hashicorp go sockaddr autoclosed cve medium severity vulnerability vulnerable library github com hashicorp go sockaddr ip address unix socket convenience functions for go dependency hierarchy github com xmidt org webpa common dev root library github com hashicorp consul api github com hashicorp serf github com hashicorp memberlist x github com hashicorp go sockaddr vulnerable library found in head commit a href found in base branch main vulnerability details go before and x before has incorrect privilege assignment when called with a non zero flags parameter the faccessat function could incorrectly report that a file is accessible publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0
526388,15287601414.0,IssuesEvent,2021-02-23 15:56:34,OpenNebula/one,https://api.github.com/repos/OpenNebula/one,closed,Marketplace images SIZE under 1MB are rounded to 0MB,Category: MarketPlace Community Priority: Normal Status: Accepted Type: Bug,"**Description**

**To Reproduce**
https://marketplace.opennebula.io/appliance/6f7a1735-5b88-4667-a319-07ffe5e684ee
Check the size of the ""Contextualization Packages"" in OpenNebula
The size is 0
**Expected behaviour**
The size of the image is 720896 bytes so should be rounded up to 1MB
**Details**
- Affected Component: [Marketplace]
- Hypervisor: [KVM]
- Version: [5.12.x]
**Additional context**
Also, the size of 0 is passed to the storage drivers...
## Progress Status
- [ ] Branch created
- [ ] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches
",1.0,"Marketplace images SIZE under 1MB are rounded to 0MB - **Description**

**To Reproduce**
https://marketplace.opennebula.io/appliance/6f7a1735-5b88-4667-a319-07ffe5e684ee
Check the size of the ""Contextualization Packages"" in OpenNebula
The size is 0
**Expected behaviour**
The size of the image is 720896 bytes so should be rounded up to 1MB
**Details**
- Affected Component: [Marketplace]
- Hypervisor: [KVM]
- Version: [5.12.x]
**Additional context**
Also, the size of 0 is passed to the storage drivers...
## Progress Status
- [ ] Branch created
- [ ] Code committed to development branch
- [ ] Testing - QA
- [ ] Documentation
- [ ] Release notes - resolved issues, compatibility, known issues
- [ ] Code committed to upstream release/hotfix branches
- [ ] Documentation committed to upstream release/hotfix branches
",0,marketplace images size under are rounded to description to reproduce check the size of the contextualization packages in opennebula the size is expected behaviour the size of the image is bytes so should be rounded up to details affected component hypervisor version additional context also the size of is passed to the storage drivers progress status branch created code committed to development branch testing qa documentation release notes resolved issues compatibility known issues code committed to upstream release hotfix branches documentation committed to upstream release hotfix branches ,0
1416,20996248346.0,IssuesEvent,2022-03-29 13:44:08,HDFGroup/hermes,https://api.github.com/repos/HDFGroup/hermes,closed,O_TMPFILE isn't supported on older kernels and in certain filesystems,portability adapter-posix,The posix adapter makes references to the `O_TMPFILE` option of `open`. It should work even if that option is not present.,True,O_TMPFILE isn't supported on older kernels and in certain filesystems - The posix adapter makes references to the `O_TMPFILE` option of `open`. It should work even if that option is not present.,1,o tmpfile isn t supported on older kernels and in certain filesystems the posix adapter makes references to the o tmpfile option of open it should work even if that option is not present ,1
262,5081247610.0,IssuesEvent,2016-12-29 09:19:45,OpenSMTPD/OpenSMTPD,https://api.github.com/repos/OpenSMTPD/OpenSMTPD,closed,Offline scan/enqueue fails due to closed fd,bug portable regression,"Hello, I noticed offline submission isn't working under Arch Linux. Currently Arch isn't setting permissions on smtpctl correctly (opening a bug about that), but even with the right permissions it seems smtpd is sabotaging itself. More specifically, smtpd closes all its FDs then tries to fdopen one:
https://github.com/OpenSMTPD/OpenSMTPD/blob/opensmtpd-6.0.2p1/smtpd/smtpd.c#L1528-L1530
It looks like the closefrom() call only made it into portable which is why the change to fdopen works there and not here.",True,"Offline scan/enqueue fails due to closed fd - Hello, I noticed offline submission isn't working under Arch Linux. Currently Arch isn't setting permissions on smtpctl correctly (opening a bug about that), but even with the right permissions it seems smtpd is sabotaging itself. More specifically, smtpd closes all its FDs then tries to fdopen one:
https://github.com/OpenSMTPD/OpenSMTPD/blob/opensmtpd-6.0.2p1/smtpd/smtpd.c#L1528-L1530
It looks like the closefrom() call only made it into portable which is why the change to fdopen works there and not here.",1,offline scan enqueue fails due to closed fd hello i noticed offline submission isn t working under arch linux currently arch isn t setting permissions on smtpctl correctly opening a bug about that but even with the right permissions it seems smtpd is sabotaging itself more specifically smtpd closes all its fds then tries to fdopen one it looks like the closefrom call only made it into portable which is why the change to fdopen works there and not here ,1
721073,24817057246.0,IssuesEvent,2022-10-25 13:52:28,salesagility/SuiteCRM-Core,https://api.github.com/repos/salesagility/SuiteCRM-Core,closed,"""Bad data passed in;"" When trying to forward or reply to emails",Type:Bug Priority:Important Area: Emails,"#### Issue
Fresh installation of SuiteCRM 8.0.1 and everything else seems to be working correctly. However after I have imported an Email, whenever I open up the actual Email record and hit Reply, Reply to All or Forward, I get the “Bad data passed in; Return to Home” error.
When I go composing a new email or when I hit Reply from the Email Dashlet without actually opening up the Email record, it works without issues.
#### Expected Behavior
The modal window with e-mail composition should open when I try to reply or forward an imported e-mail.
#### Actual Behavior
A ""Bad data passed in;"" error is displayed.
#### Steps to Reproduce
1. Import an e-mail
2. Open the e-mail
3. Select Actions->Reply
4. Note the error being displayed instead of modal window
#### Context
I am currently unable to effectively work with e-mails inside SuiteCRM.
#### Your Environment
* SuiteCRM Version used: 8.0.1
* Browser name and version (e.g. Chrome Version 51.0.2704.63 (64-bit)): Chrome Version 96.0.4664.110 (Official Build) (64-bit)
* Environment name and version (e.g. MySQL, PHP 7): PHP 7.4
* Operating System and version (e.g Ubuntu 16.04):Debian 11
",1.0,"""Bad data passed in;"" When trying to forward or reply to emails - #### Issue
Fresh installation of SuiteCRM 8.0.1 and everything else seems to be working correctly. However after I have imported an Email, whenever I open up the actual Email record and hit Reply, Reply to All or Forward, I get the “Bad data passed in; Return to Home” error.
When I go composing a new email or when I hit Reply from the Email Dashlet without actually opening up the Email record, it works without issues.
#### Expected Behavior
The modal window with e-mail composition should open when I try to reply or forward an imported e-mail.
#### Actual Behavior
A ""Bad data passed in;"" error is displayed.
#### Steps to Reproduce
1. Import an e-mail
2. Open the e-mail
3. Select Actions->Reply
4. Note the error being displayed instead of modal window
#### Context
I am currently unable to effectively work with e-mails inside SuiteCRM.
#### Your Environment
* SuiteCRM Version used: 8.0.1
* Browser name and version (e.g. Chrome Version 51.0.2704.63 (64-bit)): Chrome Version 96.0.4664.110 (Official Build) (64-bit)
* Environment name and version (e.g. MySQL, PHP 7): PHP 7.4
* Operating System and version (e.g Ubuntu 16.04):Debian 11
",0, bad data passed in when trying to forward or reply to emails issue fresh installation of suitecrm and everything else seems to be working correctly however after i have imported an email whenever i open up the actual email record and hit reply reply to all or forward i get the “bad data passed in return to home” error when i go composing a new email or when i hit reply from the email dashlet without actually opening up the email record it works without issues expected behavior the modal window with e mail composition should open when i try to reply or forward an imported e mail actual behavior a bad data passed in error is displayed steps to reproduce import an e mail open the e mail select actions reply note the error being displayed instead of modal window context i am currently unable to effectively work with e mails inside suitecrm your environment suitecrm version used browser name and version e g chrome version bit chrome version official build bit environment name and version e g mysql php php operating system and version e g ubuntu debian ,0
611,8245432177.0,IssuesEvent,2018-09-11 09:40:38,nbs-system/snuffleupagus,https://api.github.com/repos/nbs-system/snuffleupagus,opened,Snuffleupagus doesn't compile on Windows,help wanted portability,"We're using `#include <sys/wait.h>` in Snuffleupagus, but this isn't available on Windows.
We're using it to check the return code of a forked process, in the code to check for file uploads:
```C
if ((pid = fork()) == 0) {
    if (execve(ZSTR_VAL(config_upload->script), cmd, env) == -1) {
        // […]
        exit(1);
    }
} else if (pid == -1) {
    // […]
}
EFREE_3(env);
int waitstatus;
wait(&waitstatus);
// […]
}
```
I don't know if there is a way to do this on Windows :/",True,"Snuffleupagus doesn't compile on Windows - We're using `#include <sys/wait.h>` in Snuffleupagus, but this isn't available on Windows.
We're using it to check the return code of a forked process, in the code to check for file uploads:
```C
if ((pid = fork()) == 0) {
    if (execve(ZSTR_VAL(config_upload->script), cmd, env) == -1) {
        // […]
        exit(1);
    }
} else if (pid == -1) {
    // […]
}
EFREE_3(env);
int waitstatus;
wait(&waitstatus);
// […]
}
```
I don't know if there is a way to do this on Windows :/",1,snuffleupagus doesn t compile on windows we re using include in snuffleupagus but this isn t available on windows we re using it to check the return code of a forked process in the code to check for file uploads c if pid fork if execve zstr val config upload script cmd env exit else if pid efree env int waitstatus wait waitstatus i don t know if there is a way to do this on windows ,1
232893,25706425790.0,IssuesEvent,2022-12-07 01:10:31,benlazarine/atmosphere,https://api.github.com/repos/benlazarine/atmosphere,opened,CVE-2022-24439 (High) detected in GitPython-2.1.5-py2.py3-none-any.whl,security vulnerability,"## CVE-2022-24439 - High Severity Vulnerability
Vulnerable Library - GitPython-2.1.5-py2.py3-none-any.whl
Python Git Library
Library home page: https://files.pythonhosted.org/packages/7e/13/2a556eb97dcf498c915e5e04bb82bf74e07bb8b7337ca2be49bfd9fb6313/GitPython-2.1.5-py2.py3-none-any.whl
Path to dependency file: /dev_requirements.txt
Path to vulnerable library: /dev_requirements.txt
Dependency Hierarchy:
- :x: **GitPython-2.1.5-py2.py3-none-any.whl** (Vulnerable Library)
Vulnerability Details
All versions of package gitpython are vulnerable to Remote Code Execution (RCE) due to improper user input validation, which makes it possible to inject a maliciously crafted remote URL into the clone command. Exploiting this vulnerability is possible because the library makes external calls to git without sufficient sanitization of input arguments.
Publish Date: 2022-12-06
URL: CVE-2022-24439
CVSS 3 Score Details (8.1 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2022-24439 (High) detected in GitPython-2.1.5-py2.py3-none-any.whl - ## CVE-2022-24439 - High Severity Vulnerability
Vulnerable Library - GitPython-2.1.5-py2.py3-none-any.whl
Python Git Library
Library home page: https://files.pythonhosted.org/packages/7e/13/2a556eb97dcf498c915e5e04bb82bf74e07bb8b7337ca2be49bfd9fb6313/GitPython-2.1.5-py2.py3-none-any.whl
Path to dependency file: /dev_requirements.txt
Path to vulnerable library: /dev_requirements.txt
Dependency Hierarchy:
- :x: **GitPython-2.1.5-py2.py3-none-any.whl** (Vulnerable Library)
Vulnerability Details
All versions of package gitpython are vulnerable to Remote Code Execution (RCE) due to improper user input validation, which makes it possible to inject a maliciously crafted remote URL into the clone command. Exploiting this vulnerability is possible because the library makes external calls to git without sufficient sanitization of input arguments.
Publish Date: 2022-12-06
URL: CVE-2022-24439
CVSS 3 Score Details (8.1 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in gitpython none any whl cve high severity vulnerability vulnerable library gitpython none any whl python git library library home page a href path to dependency file dev requirements txt path to vulnerable library dev requirements txt dependency hierarchy x gitpython none any whl vulnerable library vulnerability details all versions of package gitpython are vulnerable to remote code execution rce due to improper user input validation which makes it possible to inject a maliciously crafted remote url into the clone command exploiting this vulnerability is possible because the library makes external calls to git without sufficient sanitization of input arguments publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with mend ,0
190796,6822598977.0,IssuesEvent,2017-11-07 20:39:10,YetiForceCompany/YetiForceCRM,https://api.github.com/repos/YetiForceCompany/YetiForceCRM,closed,OSSPassword: cannot add new record,Category::Bug Subcategory::HighPriority,"
#### Issue
Cannot add a new record into OSSPassword module.
Because: modules/OSSPasswords/actions/Save.php
~~~php
...
public function process(\App\Request $request)
{
$recordModel = $this->saveRecord($request);
...
public function saveRecord(\App\Request $request)
{
$recordId = $request->getInteger('record');
...
~~~
$request->getInteger('record') will fail, since record is empty (new record)
#### Actual Behavior
fill in https://gitdeveloper.yetiforce.com/index.php?module=OSSPasswords&view=Edit and click save:
~~~
Incorrect request
Incorrect value detected, please contact your administrator
Home page
#0 modules/OSSPasswords/actions/Save.php(35): App\Request->getInteger('record')
#1 modules/OSSPasswords/actions/Save.php(14): OSSPasswords_Save_Action->saveRecord(Object(App\Request))
#2 include/main/WebUI.php(190): OSSPasswords_Save_Action->process(Object(App\Request))
#3 index.php(25): Vtiger_WebUI->process(Object(App\Request))
#4 public_html/index.php(11): require('/home/gitdevelo...')
#5 {main}
#0 [trace] Entering getColumnFields(Users) method ...
#1 [trace] Exiting getColumnFields method ...
#2 [trace] Entering getColumnFields(OSSPasswords) method ...
#3 [trace] Exiting getColumnFields method ...
#4 [error] ERR_NOT_ALLOWED_VALUE||record|| => vendor/yetiforce/Request.php:170
~~~
",1.0,"OSSPassword: cannot add new record -
#### Issue
Cannot add a new record into OSSPassword module.
Because: modules/OSSPasswords/actions/Save.php
~~~php
...
public function process(\App\Request $request)
{
$recordModel = $this->saveRecord($request);
...
public function saveRecord(\App\Request $request)
{
$recordId = $request->getInteger('record');
...
~~~
$request->getInteger('record') will fail, since record is empty (new record)
#### Actual Behavior
fill in https://gitdeveloper.yetiforce.com/index.php?module=OSSPasswords&view=Edit and click save:
~~~
Incorrect request
Incorrect value detected, please contact your administrator
Home page
#0 modules/OSSPasswords/actions/Save.php(35): App\Request->getInteger('record')
#1 modules/OSSPasswords/actions/Save.php(14): OSSPasswords_Save_Action->saveRecord(Object(App\Request))
#2 include/main/WebUI.php(190): OSSPasswords_Save_Action->process(Object(App\Request))
#3 index.php(25): Vtiger_WebUI->process(Object(App\Request))
#4 public_html/index.php(11): require('/home/gitdevelo...')
#5 {main}
#0 [trace] Entering getColumnFields(Users) method ...
#1 [trace] Exiting getColumnFields method ...
#2 [trace] Entering getColumnFields(OSSPasswords) method ...
#3 [trace] Exiting getColumnFields method ...
#4 [error] ERR_NOT_ALLOWED_VALUE||record|| => vendor/yetiforce/Request.php:170
~~~
",0,osspassword cannot add new record issue cannot add a new record into osspassword module because modules osspasswords actions save php php public function process app request request recordmodel this saverecord request public function saverecord app request request recordid request getinteger record request getinteger record will fail since record is empty new record actual behavior fill in and click save incorrect request incorrect value detected please contact your administrator home page modules osspasswords actions save php app request getinteger record modules osspasswords actions save php osspasswords save action saverecord object app request include main webui php osspasswords save action process object app request index php vtiger webui process object app request public html index php require home gitdevelo main entering getcolumnfields users method exiting getcolumnfields method entering getcolumnfields osspasswords method exiting getcolumnfields method err not allowed value record vendor yetiforce request php ,0
94,3115207698.0,IssuesEvent,2015-09-03 13:26:43,svaarala/duktape,https://api.github.com/repos/svaarala/duktape,opened,Using inlining control with multiple sources build,enhancement portability,"As part of Duktape 1.3.0 cleanups I needed to disable ""noinline"", ""inline"", and ""always inline"" macros for the multiple sources build because they cause compiler specific trouble as the functions are declared in a header but the definition is in a separate file.
This isn't really portable, see e.g. http://stackoverflow.com/questions/5229343/how-to-declare-an-inline-function-in-c99-multi-file-project.
Try to figure out a portable solution for inline function declarations for both single and multiple source compilation. Now inlining control is only enabled for single source compilation which is a reasonable compromise.
Note that this doesn't mean inlining isn't used, but the compiler gets no hints as to which functions should be forcibly inlined.
",True,"Using inlining control with multiple sources build - As part of Duktape 1.3.0 cleanups I needed to disable ""noinline"", ""inline"", and ""always inline"" macros for the multiple sources build because they cause compiler specific trouble as the functions are declared in a header but the definition is in a separate file.
This isn't really portable, see e.g. http://stackoverflow.com/questions/5229343/how-to-declare-an-inline-function-in-c99-multi-file-project.
Try to figure out a portable solution for inline function declarations for both single and multiple source compilation. Now inlining control is only enabled for single source compilation which is a reasonable compromise.
Note that this doesn't mean inlining isn't used, but the compiler gets no hints as to which functions should be forcibly inlined.
",1,using inlining control with multiple sources build as part of duktape cleanups i needed to disable noinline inline and always inline macros for the multiple sources build because they cause compiler specific trouble as the functions are declared in a header but the definition is in a separate file this isn t really portable see e g try to figure out a portable solution for inline function declarations for both single and multiple source compilation now inlining control is only enabled for single source compilation which is a reasonable compromise note that this doesn t mean inlining isn t used but the compiler gets no hints as to which functions should be forcibly inlined ,1
518023,15022556542.0,IssuesEvent,2021-02-01 17:05:38,wso2/product-apim-tooling,https://api.github.com/repos/wso2/product-apim-tooling,closed,"Restructure the code segments related to ""k8s"" commands",4.0.0 Priority/High Type/Improvement,"**Description:**
With [1], the commands of the API Controller has been revamped and a new scope named ""k8s"" was introduced which can be used with Kubernetes related commands.
The deprecated old commands have been moved to **cmd/deprecated** package and newly revamped files are created in the **cmd** package. Code duplication is there when considering the k8s related commands in both of these packages. ( **cmd/deprecated** and **cmd**).
Those code segments should be restructured and better if the **impl** package can be used to have the common code related to both the above packages.
Note:- Please refer other API Controller commands related code segments to get an insight about the procedure of the restructuring.
**Suggested Labels:**
Type/Improvement
**Affected Product Version:**
APICTL 4.x
[1] https://github.com/wso2/product-apim-tooling/pull/518",1.0,"Restructure the code segments related to ""k8s"" commands - **Description:**
With [1], the commands of the API Controller has been revamped and a new scope named ""k8s"" was introduced which can be used with Kubernetes related commands.
The deprecated old commands have been moved to **cmd/deprecated** package and newly revamped files are created in the **cmd** package. Code duplication is there when considering the k8s related commands in both of these packages. ( **cmd/deprecated** and **cmd**).
Those code segments should be restructured and better if the **impl** package can be used to have the common code related to both the above packages.
Note:- Please refer other API Controller commands related code segments to get an insight about the procedure of the restructuring.
**Suggested Labels:**
Type/Improvement
**Affected Product Version:**
APICTL 4.x
[1] https://github.com/wso2/product-apim-tooling/pull/518",0,restructure the code segments related to commands description with the commands of the api controller has been revamped and a new scope named was introduced which can be used with kubernetes related commands the deprecated old commands have been moved to cmd deprecated package and newly revamped files are created in the cmd package code duplication is there when considering the related commands in both of these packages cmd deprecated and cmd those code segments should be restructured and better if the impl package can be used to have the common code related to both the above packages note please refer other api controller commands related code segments to get an insight about the procedure of the restructuring suggested labels type improvement affected product version apictl x ,0
20,2622138050.0,IssuesEvent,2015-03-04 00:00:47,funcoeszz/funcoeszz,https://api.github.com/repos/funcoeszz/funcoeszz,closed,zzfutebol quebrada no BSD (awk),portabilidade quebrada,"Aqui no meu Mac (que usa o awk do BSD, que é diferente daquele do Linux), a função não retorna nada:
```console
$ zzfutebol
$
```
* O lynx baixou os dados do site, tudo certo.
* A variável `$listajogos` está vazia, então o awk não ""grepou"" nada.
Testei aqui e meu awk suporta `\t`, `+` e `[:alpha:]` nas regex, então não seria este o problema. Também não usa o ou `|` nas regex, que poderia ser outra fonte de problema no BSD.
Como não manjo nada de awk, não tenho mais ideias. @itamarnet? @faustovaz?
Independente deste problema, uma sugestão que dou pra simplificar o código, é apagar todos os tabs (ou trocá-los por espaços) antes do awk. Assim você não precisa se preocupar com eles nas regex.
```bash
$ZZWWWDUMP $url | tr -d '\t' | awk '{
# ...
}'
```
",True,"zzfutebol quebrada no BSD (awk) - Aqui no meu Mac (que usa o awk do BSD, que é diferente daquele do Linux), a função não retorna nada:
```console
$ zzfutebol
$
```
* O lynx baixou os dados do site, tudo certo.
* A variável `$listajogos` está vazia, então o awk não ""grepou"" nada.
Testei aqui e meu awk suporta `\t`, `+` e `[:alpha:]` nas regex, então não seria este o problema. Também não usa o ou `|` nas regex, que poderia ser outra fonte de problema no BSD.
Como não manjo nada de awk, não tenho mais ideias. @itamarnet? @faustovaz?
Independente deste problema, uma sugestão que dou pra simplificar o código, é apagar todos os tabs (ou trocá-los por espaços) antes do awk. Assim você não precisa se preocupar com eles nas regex.
```bash
$ZZWWWDUMP $url | tr -d '\t' | awk '{
# ...
}'
```
",1,zzfutebol quebrada no bsd awk aqui no meu mac que usa o awk do bsd que é diferente daquele do linux a função não retorna nada console zzfutebol o lynx baixou os dados do site tudo certo a variável listajogos está vazia então o awk não grepou nada testei aqui e meu awk suporta t e nas regex então não seria este o problema também não usa o ou nas regex que poderia ser outra fonte de problema no bsd como não manjo nada de awk não tenho mais ideias itamarnet faustovaz independente deste problema uma sugestão que dou pra simplificar o código é apagar todos os tabs ou trocá los por espaços antes do awk assim você não precisa se preocupar com eles nas regex bash zzwwwdump url tr d t awk ,1
807,10526478308.0,IssuesEvent,2019-09-30 17:11:04,Azure/azure-functions-host,https://api.github.com/repos/Azure/azure-functions-host,closed,Add RuntimeSiteName to FunctionsLogs and FunctionsMetrics,P1 Supportability analytics,"For an app with slots, there is a version of the app name that looks like this: ""myapp__f1de"". This is typically referred to as ""RuntimeSiteName"". Some of the other system tables log in this format, but FunctionsLogs and FunctionsMetrics do not. This prevents correlation across the tables.",True,"Add RuntimeSiteName to FunctionsLogs and FunctionsMetrics - For an app with slots, there is a version of the app name that looks like this: ""myapp__f1de"". This is typically referred to as ""RuntimeSiteName"". Some of the other system tables log in this format, but FunctionsLogs and FunctionsMetrics do not. This prevents correlation across the tables.",1,add runtimesitename to functionslogs and functionsmetrics for an app with slots there is a version of the app name that looks like this myapp this is typically referred to as runtimesitename some of the other system tables log in this format but functionslogs and functionsmetrics do not this prevents correlation across the tables ,1
741132,25780758810.0,IssuesEvent,2022-12-09 15:44:03,blindnet-io/privacy-components-web,https://api.github.com/repos/blindnet-io/privacy-components-web,closed,Implement PRIV Types from openAPI Schema,effort2: medium (days) priority: 4 (useful) scope: privacy portal,"Automatically generate typescript types from our open API schema.
- All models should be stored in the @blindnet/core package",1.0,"Implement PRIV Types from openAPI Schema - Automatically generate typescript types from our open API schema.
- All models should be stored in the @blindnet/core package",0,implement priv types from openapi schema automatically generate typescript types from our open api schema all models should be stored in the blindnet core package,0
601,8099597164.0,IssuesEvent,2018-08-11 10:52:44,dpteam/GLQuake3D,https://api.github.com/repos/dpteam/GLQuake3D,reopened,Convert project to CMake,portability,"CMake makes it possible to easily create project for building on x64, as well as Makefiles or any other build systems for other platforms.
Requires:
- [x] #2 Get rid of ASM and only use C",True,"Convert project to CMake - CMake makes it possible to easily create project for building on x64, as well as Makefiles or any other build systems for other platforms.
Requires:
- [x] #2 Get rid of ASM and only use C",1,convert project to cmake cmake makes it possible to easily create project for building on as well as makefiles or any other build systems for other platforms requires get rid of asm and only use c,1
448295,12947262285.0,IssuesEvent,2020-07-18 22:32:21,flickchicks/flick-backend,https://api.github.com/repos/flickchicks/flick-backend,closed,Update search view to include `tag=` GET query param,good first issue high priority,"* we already have a `query=` GET query param that is just the search query of a show name or username
* we want to add another option of `tag=` for the client to search by `tag_id`",1.0,"Update search view to include `tag=` GET query param - * we already have a `query=` GET query param that is just the search query of a show name or username
* we want to add another option of `tag=` for the client to search by `tag_id`",0,update search view to include tag get query param we already have a query get query param that is just the search query of a show name or username we want to add another option of tag for the client to search by tag id ,0
1963,30671332300.0,IssuesEvent,2023-07-25 22:53:50,golang/vulndb,https://api.github.com/repos/golang/vulndb,closed,x/vulndb: potential Go vuln in github.com/containers/podman/v4: GHSA-rh5f-2w6r-q7vj,excluded: NOT_IMPORTABLE,"In GitHub Security Advisory [GHSA-rh5f-2w6r-q7vj](https://github.com/advisories/GHSA-rh5f-2w6r-q7vj), there is a vulnerability in the following Go packages or modules:
| Unit | Fixed | Vulnerable Ranges |
| - | - | - |
| [github.com/containers/podman/v4](https://pkg.go.dev/github.com/containers/podman/v4) | 1.4.0 | < 1.4.0 |
Cross references:
- Module github.com/containers/podman/v4 appears in issue #1159
- Module github.com/containers/podman/v4 appears in issue #1681
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/containers/podman/v4
versions:
- fixed: 1.4.0
packages:
- package: github.com/containers/podman/v4
summary: Podman Path Traversal Vulnerability leads to arbitrary file read/write
description: |-
A path traversal vulnerability has been discovered in podman before version
1.4.0 in the way it handles symlinks inside containers. An attacker who has
compromised an existing container can cause arbitrary files on the host
filesystem to be read/written when an administrator tries to copy a file from/to
the container.
cves:
- CVE-2019-10152
ghsas:
- GHSA-rh5f-2w6r-q7vj
references:
- web: https://nvd.nist.gov/vuln/detail/CVE-2019-10152
- report: https://github.com/containers/libpod/issues/3211
- fix: https://github.com/containers/libpod/pull/3214
- web: https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2019-10152
- web: https://github.com/containers/libpod/blob/master/RELEASE_NOTES.md#140
- web: http://lists.opensuse.org/opensuse-security-announce/2019-09/msg00001.html
- advisory: https://github.com/advisories/GHSA-rh5f-2w6r-q7vj
```",True,"x/vulndb: potential Go vuln in github.com/containers/podman/v4: GHSA-rh5f-2w6r-q7vj - In GitHub Security Advisory [GHSA-rh5f-2w6r-q7vj](https://github.com/advisories/GHSA-rh5f-2w6r-q7vj), there is a vulnerability in the following Go packages or modules:
| Unit | Fixed | Vulnerable Ranges |
| - | - | - |
| [github.com/containers/podman/v4](https://pkg.go.dev/github.com/containers/podman/v4) | 1.4.0 | < 1.4.0 |
Cross references:
- Module github.com/containers/podman/v4 appears in issue #1159
- Module github.com/containers/podman/v4 appears in issue #1681
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/containers/podman/v4
versions:
- fixed: 1.4.0
packages:
- package: github.com/containers/podman/v4
summary: Podman Path Traversal Vulnerability leads to arbitrary file read/write
description: |-
A path traversal vulnerability has been discovered in podman before version
1.4.0 in the way it handles symlinks inside containers. An attacker who has
compromised an existing container can cause arbitrary files on the host
filesystem to be read/written when an administrator tries to copy a file from/to
the container.
cves:
- CVE-2019-10152
ghsas:
- GHSA-rh5f-2w6r-q7vj
references:
- web: https://nvd.nist.gov/vuln/detail/CVE-2019-10152
- report: https://github.com/containers/libpod/issues/3211
- fix: https://github.com/containers/libpod/pull/3214
- web: https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2019-10152
- web: https://github.com/containers/libpod/blob/master/RELEASE_NOTES.md#140
- web: http://lists.opensuse.org/opensuse-security-announce/2019-09/msg00001.html
- advisory: https://github.com/advisories/GHSA-rh5f-2w6r-q7vj
```",1,x vulndb potential go vuln in github com containers podman ghsa in github security advisory there is a vulnerability in the following go packages or modules unit fixed vulnerable ranges cross references module github com containers podman appears in issue module github com containers podman appears in issue see for instructions on how to triage this report modules module github com containers podman versions fixed packages package github com containers podman summary podman path traversal vulnerability leads to arbitrary file read write description a path traversal vulnerability has been discovered in podman before version in the way it handles symlinks inside containers an attacker who has compromised an existing container can cause arbitrary files on the host filesystem to be read written when an administrator tries to copy a file from to the container cves cve ghsas ghsa references web report fix web web web advisory ,1
1801,26549577944.0,IssuesEvent,2023-01-20 05:55:14,alcionai/corso,https://api.github.com/repos/alcionai/corso,closed,Graph error not recorded when fetching exchange items,bug supportability,"When populating exchange collections, any items that have errors from graph API do not record the error",True,"Graph error not recorded when fetching exchange items - When populating exchange collections, any items that have errors from graph API do not record the error",1,graph error not recorded when fetching exchange items when populating exchange collections any items that have errors from graph api do not record the error,1
562294,16656277075.0,IssuesEvent,2021-06-05 15:31:44,panix-os/Panix,https://api.github.com/repos/panix-os/Panix,closed,Move Panix Build Container to Appropriate Repo,enhancement high-priority,Also means `.scuba.yml` will need to be updated to reference the new location and version.,1.0,Move Panix Build Container to Appropriate Repo - Also means `.scuba.yml` will need to be updated to reference the new location and version.,0,move panix build container to appropriate repo also means scuba yml will need to be updated to reference the new location and version ,0
230536,17620507811.0,IssuesEvent,2021-08-18 14:47:58,Angelinaaaaaaa/Lentes,https://api.github.com/repos/Angelinaaaaaaa/Lentes,opened,Avaliação da Proposta de Trabalho,bug documentation enhancement,"EQUIPE
Ok.
PROBLEMA
Ok. Identificar necessidade de lentes de contato e recomendar um tipo de lente.
DATASET
Ok. É o dataset Lenses da UCI: https://archive.ics.uci.edu/ml/datasets/Lenses
TÉCNICA
PARCIALMENTE CORRETO. Problemas:
• Conforme enunciado a equipe já deveria descrever como o problema será modelado para aplicação da técnica. Ou seja, quais são as variáveis consideradas pela árvore de decisão? Quais são os valores possíveis de cada variável? Qual é a saída da árvore de decisão e possíveis valores? Como será encontrada a árvore de decisão adequada? Qual estratégia de validação cruzada pretende utilizar para determinar a melhor árvore de decisão? Qual métrica será utilizada para medir o desempenho destas árvores?
OBSERVAÇÕES
A entrega da proposta do trabalho foi realizada em 14/08 através do compartilhamento do projeto no Github, portanto com 2 semanas de atraso. Conforme enunciado, entregas em atraso estarão sujeitas a desconto na nota final. Neste caso, será aplicado um desconto de 20% na nota final (este desconto é menor do que a pontuação indicada no enunciado).
Quando for realizada a avaliação do trabalho completo, será verificado se a equipe corrigiu os problemas acima descritos. Se desejar, a equipe pode comparecer em alguma aula síncrona para esclarecimentos, ou então agendar horário extra-classe com o professor.
",1.0,"Avaliação da Proposta de Trabalho - EQUIPE
Ok.
PROBLEMA
Ok. Identificar necessidade de lentes de contato e recomendar um tipo de lente.
DATASET
Ok. É o dataset Lenses da UCI: https://archive.ics.uci.edu/ml/datasets/Lenses
TÉCNICA
PARCIALMENTE CORRETO. Problemas:
• Conforme enunciado a equipe já deveria descrever como o problema será modelado para aplicação da técnica. Ou seja, quais são as variáveis consideradas pela árvore de decisão? Quais são os valores possíveis de cada variável? Qual é a saída da árvore de decisão e possíveis valores? Como será encontrada a árvore de decisão adequada? Qual estratégia de validação cruzada pretende utilizar para determinar a melhor árvore de decisão? Qual métrica será utilizada para medir o desempenho destas árvores?
OBSERVAÇÕES
A entrega da proposta do trabalho foi realizada em 14/08 através do compartilhamento do projeto no Github, portanto com 2 semanas de atraso. Conforme enunciado, entregas em atraso estarão sujeitas a desconto na nota final. Neste caso, será aplicado um desconto de 20% na nota final (este desconto é menor do que a pontuação indicada no enunciado).
Quando for realizada a avaliação do trabalho completo, será verificado se a equipe corrigiu os problemas acima descritos. Se desejar, a equipe pode comparecer em alguma aula síncrona para esclarecimentos, ou então agendar horário extra-classe com o professor.
",0,avaliação da proposta de trabalho equipe ok problema ok identificar necessidade de lentes de contato e recomendar um tipo de lente dataset ok é o dataset lenses da uci técnica parcialmente correto problemas • conforme enunciado a equipe já deveria descrever como o problema será modelado para aplicação da técnica ou seja quais são as variáveis consideradas pela árvore de decisão quais são os valores possíveis de cada variável qual é a saída da árvore de decisão e possíveis valores como será encontrada a árvore de decisão adequada qual estratégia de validação cruzada pretende utilizar para determinar a melhor árvore de decisão qual métrica será utilizada para medir o desempenho destas árvores observações a entrega da proposta do trabalho foi realizada em através do compartilhamento do projeto no github portanto com semanas de atraso conforme enunciado entregas em atraso estarão sujeitas a desconto na nota final neste caso será aplicado um desconto de na nota final este desconto é menor do que a pontuação indicada no enunciado quando for realizada a avaliação do trabalho completo será verificado se a equipe corrigiu os problemas acima descritos se desejar a equipe pode comparecer em alguma aula síncrona para esclarecimentos ou então agendar horário extra classe com o professor ,0
1233,16452110358.0,IssuesEvent,2021-05-21 07:29:22,openwall/lkrg,https://api.github.com/repos/openwall/lkrg,closed,Error build on kernel 5.4.120,portability,"```
# KERNELRELEASE=5.4.120 make
make -C /lib/modules/5.4.120/build M=/usr/src/lkrg-0.9.1 modules
make[1]: Entering directory '/usr/src/linux-5.4.120'
Building modules, stage 2.
MODPOST 1 modules
ERROR: ""__module_text_address"" [/usr/src/lkrg-0.9.1/p_lkrg.ko] undefined!
ERROR: ""__module_address"" [/usr/src/lkrg-0.9.1/p_lkrg.ko] undefined!
scripts/Makefile.modpost:93: recipe for target '__modpost' failed
make[2]: *** [__modpost] Error 1
Makefile:1647: recipe for target 'modules' failed
make[1]: *** [modules] Error 2
make[1]: Leaving directory '/usr/src/linux-5.4.120'
Makefile:97: recipe for target 'all' failed
make: *** [all] Error 2
```
After commit on Linux Kernel ebb32e28691e27d13584105306ffea6fca1b6284
https://lwn.net/Articles/326026/
```
commit ebb32e28691e27d13584105306ffea6fca1b6284
Author: Rusty Russell
Date: Sat Mar 28 23:12:51 2009 -0600
module: __module_address
Impact: New API, cleanup
ksplice wants to know the bounds of a module, not just the module text.
It makes sense to have __module_address. We then implement
is_module_address and __module_text_address in terms of this (and
change is_module_text_address() to bool while we're at it).
Also, add proper kerneldoc for them all.
Cc: Anders Kaseorg
Cc: Jeff Arnold
Cc: Tim Abbott
Signed-off-by: Rusty Russell
include/linux/module.h | 20 +++++++++---
kernel/module.c | 76 ++++++++++++++++++++++++++++++++++++-----------
2 files changed, 73 insertions(+), 23 deletions(-)
```",True,"Error build on kernel 5.4.120 - ```
# KERNELRELEASE=5.4.120 make
make -C /lib/modules/5.4.120/build M=/usr/src/lkrg-0.9.1 modules
make[1]: Entering directory '/usr/src/linux-5.4.120'
Building modules, stage 2.
MODPOST 1 modules
ERROR: ""__module_text_address"" [/usr/src/lkrg-0.9.1/p_lkrg.ko] undefined!
ERROR: ""__module_address"" [/usr/src/lkrg-0.9.1/p_lkrg.ko] undefined!
scripts/Makefile.modpost:93: recipe for target '__modpost' failed
make[2]: *** [__modpost] Error 1
Makefile:1647: recipe for target 'modules' failed
make[1]: *** [modules] Error 2
make[1]: Leaving directory '/usr/src/linux-5.4.120'
Makefile:97: recipe for target 'all' failed
make: *** [all] Error 2
```
After commit on Linux Kernel ebb32e28691e27d13584105306ffea6fca1b6284
https://lwn.net/Articles/326026/
```
commit ebb32e28691e27d13584105306ffea6fca1b6284
Author: Rusty Russell
Date: Sat Mar 28 23:12:51 2009 -0600
module: __module_address
Impact: New API, cleanup
ksplice wants to know the bounds of a module, not just the module text.
It makes sense to have __module_address. We then implement
is_module_address and __module_text_address in terms of this (and
change is_module_text_address() to bool while we're at it).
Also, add proper kerneldoc for them all.
Cc: Anders Kaseorg
Cc: Jeff Arnold
Cc: Tim Abbott
Signed-off-by: Rusty Russell
include/linux/module.h | 20 +++++++++---
kernel/module.c | 76 ++++++++++++++++++++++++++++++++++++-----------
2 files changed, 73 insertions(+), 23 deletions(-)
```",1,error build on kernel kernelrelease make make c lib modules build m usr src lkrg modules make entering directory usr src linux building modules stage modpost modules error module text address undefined error module address undefined scripts makefile modpost recipe for target modpost failed make error makefile recipe for target modules failed make error make leaving directory usr src linux makefile recipe for target all failed make error after commit on linux kernel commit author rusty russell date sat mar module module address impact new api cleanup ksplice wants to know the bounds of a module not just the module text it makes sense to have module address we then implement is module address and module text address in terms of this and change is module text address to bool while we re at it also add proper kerneldoc for them all cc anders kaseorg cc jeff arnold cc tim abbott signed off by rusty russell include linux module h kernel module c files changed insertions deletions ,1
357743,10617477533.0,IssuesEvent,2019-10-12 19:22:41,ASbeletsky/TimeOffTracker,https://api.github.com/repos/ASbeletsky/TimeOffTracker,closed,Создать структуру проекта,done enhancement high priority,"Создать структуру проекта, выделить слои, добавить ссылки на слои. Добавление пакетов нужных для дальнейшей разработки:
Разбить на слои:
- Utility
- Business
- Data
- Entities
- Dto
- Interfaces
- Web
- Tests
Используемые пакеты:
- Entity Framework
- AutoMapper
- Identity",1.0,"Создать структуру проекта - Создать структуру проекта, выделить слои, добавить ссылки на слои. Добавление пакетов нужных для дальнейшей разработки:
Разбить на слои:
- Utility
- Business
- Data
- Entities
- Dto
- Interfaces
- Web
- Tests
Используемые пакеты:
- Entity Framework
- AutoMapper
- Identity",0,создать структуру проекта создать структуру проекта выделить слои добавить ссылки на слои добавление пакетов нужных для дальнейшей разработки разбить на слои utility business data entities dto interfaces web tests используемые пакеты entity framework automapper identity,0
805,10512756082.0,IssuesEvent,2019-09-27 18:44:13,microsoft/BotBuilder-Samples,https://api.github.com/repos/microsoft/BotBuilder-Samples,closed,[Teams] Provide a sample Teams bot that shows how to use message extensions with search-based command,4.6 P0 approved supportability teams,"Sample information
Sample type: samples
Sample language:
[ ] dotnetcore
[ ] nodejs
[ ] typescript
Sample name:
MessageExtWithSearchCmd
",True,"[Teams] Provide a sample Teams bot that shows how to use message extensions with search-based command - Sample information
Sample type: samples
Sample language:
[ ] dotnetcore
[ ] nodejs
[ ] typescript
Sample name:
MessageExtWithSearchCmd
",1, provide a sample teams bot that shows how to use message extensions with search based command sample information sample type samples sample language dotnetcore nodejs typescript sample name messageextwithsearchcmd ,1
1509,22153310100.0,IssuesEvent,2022-06-03 19:23:10,apache/beam,https://api.github.com/repos/apache/beam,opened,Pipeline proto seems to be incorrect for Combine.GroupedValues,portability P3 runner-dataflow sdk-java-core clarified sub-task,"It looks like CombineTest$BasicTests#testHotKeyCombining on Dataflow (and possibly other runners) is creating an invalid pipeline proto since the transform doesn't have an environment (and possibly a spec):
```
I0610 16:05:23.791430 14054 fnapi_instruction_graph_rewriter.cc:230] transforms {
I0610 16:05:23.791402
14054 fnapi_instruction_graph_rewriter.cc:230] key: ""HotMean/PostCombine/Combine.GroupedValues""
I0610
16:05:23.791404 14054 fnapi_instruction_graph_rewriter.cc:230] value {
I0610 16:05:23.791406
14054 fnapi_instruction_graph_rewriter.cc:230] inputs {
I0610 16:05:23.791408 14054 fnapi_instruction_graph_rewriter.cc:230]
key: ""org.apache.beam.sdk.values.PCollection.:400#56b99bb29b40d50c""
I0610 16:05:23.791410
14054 fnapi_instruction_graph_rewriter.cc:230] value: ""HotMean/PostCombine/GroupByKey.out""
I0610
16:05:23.791412 14054 fnapi_instruction_graph_rewriter.cc:230] }
I0610 16:05:23.791414 14054
fnapi_instruction_graph_rewriter.cc:230] outputs {
I0610 16:05:23.791416 14054 fnapi_instruction_graph_rewriter.cc:230]
key: ""org.apache.beam.sdk.values.PCollection.:400#4fa4d31096ca160c""
I0610 16:05:23.791419
14054 fnapi_instruction_graph_rewriter.cc:230] value: ""HotMean/PostCombine/Combine.GroupedValues/ParDo(Anonymous)/ParMultiDo(Anonymous).output""
I0610
16:05:23.791421 14054 fnapi_instruction_graph_rewriter.cc:230] }
I0610 16:05:23.791423 14054
fnapi_instruction_graph_rewriter.cc:230] unique_name: ""HotMean/PostCombine/Combine.GroupedValues""
I0610
16:05:23.791426 14054 fnapi_instruction_graph_rewriter.cc:230] }
I0610 16:05:23.791428 14054
fnapi_instruction_graph_rewriter.cc:230] }
```
Imported from Jira [BEAM-10266](https://issues.apache.org/jira/browse/BEAM-10266). Original Jira may contain additional context.
Reported by: lcwik.
Subtask of issue #18583",True,"Pipeline proto seems to be incorrect for Combine.GroupedValues - It looks like CombineTest$BasicTests#testHotKeyCombining on Dataflow (and possibly other runners) is creating an invalid pipeline proto since the transform doesn't have an environment (and possibly a spec):
```
I0610 16:05:23.791430 14054 fnapi_instruction_graph_rewriter.cc:230] transforms {
I0610 16:05:23.791402
14054 fnapi_instruction_graph_rewriter.cc:230] key: ""HotMean/PostCombine/Combine.GroupedValues""
I0610
16:05:23.791404 14054 fnapi_instruction_graph_rewriter.cc:230] value {
I0610 16:05:23.791406
14054 fnapi_instruction_graph_rewriter.cc:230] inputs {
I0610 16:05:23.791408 14054 fnapi_instruction_graph_rewriter.cc:230]
key: ""org.apache.beam.sdk.values.PCollection.:400#56b99bb29b40d50c""
I0610 16:05:23.791410
14054 fnapi_instruction_graph_rewriter.cc:230] value: ""HotMean/PostCombine/GroupByKey.out""
I0610
16:05:23.791412 14054 fnapi_instruction_graph_rewriter.cc:230] }
I0610 16:05:23.791414 14054
fnapi_instruction_graph_rewriter.cc:230] outputs {
I0610 16:05:23.791416 14054 fnapi_instruction_graph_rewriter.cc:230]
key: ""org.apache.beam.sdk.values.PCollection.:400#4fa4d31096ca160c""
I0610 16:05:23.791419
14054 fnapi_instruction_graph_rewriter.cc:230] value: ""HotMean/PostCombine/Combine.GroupedValues/ParDo(Anonymous)/ParMultiDo(Anonymous).output""
I0610
16:05:23.791421 14054 fnapi_instruction_graph_rewriter.cc:230] }
I0610 16:05:23.791423 14054
fnapi_instruction_graph_rewriter.cc:230] unique_name: ""HotMean/PostCombine/Combine.GroupedValues""
I0610
16:05:23.791426 14054 fnapi_instruction_graph_rewriter.cc:230] }
I0610 16:05:23.791428 14054
fnapi_instruction_graph_rewriter.cc:230] }
```
Imported from Jira [BEAM-10266](https://issues.apache.org/jira/browse/BEAM-10266). Original Jira may contain additional context.
Reported by: lcwik.
Subtask of issue #18583",1,pipeline proto seems to be incorrect for combine groupedvalues it looks like combinetest basictests testhotkeycombining on dataflow and possibly other runners is creating an invalid pipeline proto since the transform doesn t an environment and possible a spec fnapi instruction graph rewriter cc transforms fnapi instruction graph rewriter cc key hotmean postcombine combine groupedvalues fnapi instruction graph rewriter cc value fnapi instruction graph rewriter cc inputs fnapi instruction graph rewriter cc key org apache beam sdk values pcollection fnapi instruction graph rewriter cc value hotmean postcombine groupbykey out fnapi instruction graph rewriter cc fnapi instruction graph rewriter cc outputs fnapi instruction graph rewriter cc key org apache beam sdk values pcollection fnapi instruction graph rewriter cc value hotmean postcombine combine groupedvalues pardo anonymous parmultido anonymous output fnapi instruction graph rewriter cc fnapi instruction graph rewriter cc unique name hotmean postcombine combine groupedvalues fnapi instruction graph rewriter cc fnapi instruction graph rewriter cc imported from jira original jira may contain additional context reported by lcwik subtask of issue ,1
742815,25870846048.0,IssuesEvent,2022-12-14 02:27:00,OrderN/CONQUEST-release,https://api.github.com/repos/OrderN/CONQUEST-release,closed,Small change when setting DM.SolutionMethod ,area: main-source improves: stability priority: minor time: hours type: bug,"If you would set
DM.SolutionMethod diag
by mistake, your calculations would be O(N) mode, not diagonalisation, though the default option of DM.SolutionMethod is ""diagon"".
This is because the corresponding part (initial_read_module.f90) is
! Solution method - O(N) or diagonalisation ?
method = fdf_string(6,'DM.SolutionMethod','diagon')
if(leqi(method,'diagon')) then
flag_diagonalisation = .true.
flag_check_Diag = .true.
else
flag_diagonalisation = .false.
flag_check_Diag = .false.
end if
Thus, I think we should change this part as following,
! Solution method - O(N) or diagonalisation ?
method = fdf_string(6,'DM.SolutionMethod','diagon')
if(leqi(method,’ordern')) then
flag_diagonalisation = .false.
flag_check_Diag = .false.
else
flag_diagonalisation = .true.
flag_check_Diag = .true.
end if
What do you think ?
(Can I add ""area:input"" ?
And.. This is not a bug, to be strict.)",1.0,"Small change when setting DM.SolutionMethod - If you would set
DM.SolutionMethod diag
by mistake, your calculations would be O(N) mode, not diagonalisation, though the default option of DM.SolutionMethod is ""diagon"".
This is because the corresponding part (initial_read_module.f90) is
! Solution method - O(N) or diagonalisation ?
method = fdf_string(6,'DM.SolutionMethod','diagon')
if(leqi(method,'diagon')) then
flag_diagonalisation = .true.
flag_check_Diag = .true.
else
flag_diagonalisation = .false.
flag_check_Diag = .false.
end if
Thus, I think we should change this part as following,
! Solution method - O(N) or diagonalisation ?
method = fdf_string(6,'DM.SolutionMethod','diagon')
if(leqi(method,’ordern')) then
flag_diagonalisation = .false.
flag_check_Diag = .false.
else
flag_diagonalisation = .true.
flag_check_Diag = .true.
end if
What do you think ?
(Can I add ""area:input"" ?
And.. This is not a bug, to be strict.)",0,small change when setting dm solutionmethod if you would set dm solutionmethod diag by mistake your calculations would be o n mode not diagonalisation though the default option of dm solutionmethod is diagon this is because the corresponding part initial read module is solution method o n or diagonalisation method fdf string dm solutionmethod diagon if leqi method diagon then flag diagonalisation true flag check diag true else flag diagonalisation false flag check diag false end if thus i think we should change this part as following solution method o n or diagonalisation method fdf string dm solutionmethod diagon if leqi method ’ordern then flag diagonalisation false flag check diag false else flag diagonalisation true flag check diag true end if what do you think can i add area input and this is not a bug to be strict ,0
93353,19184791522.0,IssuesEvent,2021-12-05 01:57:07,CSC207-UofT/course-project-group-010,https://api.github.com/repos/CSC207-UofT/course-project-group-010,closed,Misplaced rating value bound check,code smell,"Currently, whether a user-provided rating value is in-bounds is checked in [CourseManager](https://github.com/CSC207-UofT/course-project-group-010/blob/6e75d460d87626a94aa4c54594e901fa8b586628/src/main/java/usecase/CourseManager.java) (lines 59-61).
This feels like a violation of Clean Architecture principles: why should CourseManager care about how Rating is implemented?
**Suggested solution:**
1. CourseManager calls Rating constructor with parsed user rating value **in a try-except block**.
2. Check in-bounds condition in Rating constructor.
a. If in-bounds, create Rating object normally.
b. If out-of-bounds, throw an exception.
3. CourseManager catches the exception if one is thrown and rethrows it up to the command line. Otherwise, proceed normally.",1.0,"Misplaced rating value bound check - Currently, whether a user-provided rating value is in-bounds is checked in [CourseManager](https://github.com/CSC207-UofT/course-project-group-010/blob/6e75d460d87626a94aa4c54594e901fa8b586628/src/main/java/usecase/CourseManager.java) (lines 59-61).
This feels like a violation of Clean Architecture principles: why should CourseManager care about how Rating is implemented?
**Suggested solution:**
1. CourseManager calls Rating constructor with parsed user rating value **in a try-except block**.
2. Check in-bounds condition in Rating constructor.
a. If in-bounds, create Rating object normally.
b. If out-of-bounds, throw an exception.
3. CourseManager catches the exception if one is thrown and rethrows it up to the command line. Otherwise, proceed normally.",0,misplaced rating value bound check currently whether a user provided rating value is in bounds is checked in lines this feels like a violation of clean architecture principles why should coursemanager care about how rating is implemented suggested solution coursemanager calls rating constructor with parsed user rating value in a try except block check in bounds condition in rating constructor a if in bounds create rating object normally b if out of bounds throw an exception coursemanager catches the exception if one is thrown and rethrows it up to the command line otherwise proceed normally ,0
430425,12453055539.0,IssuesEvent,2020-05-27 13:18:12,ooni/probe,https://api.github.com/repos/ooni/probe,closed,Better thread handling,effort/M ooni/probe-mobile platform/ios priority/low,"The tests in the iOS don't run in the order we queue them, probably is due to some multithreading issue or poor thread managment, this issue will aim to solve this problem.
https://github.com/ooni/probe-ios/pull/358",1.0,"Better thread handling - The tests in the iOS don't run in the order we queue them, probably is due to some multithreading issue or poor thread managment, this issue will aim to solve this problem.
https://github.com/ooni/probe-ios/pull/358",0,better thread handling the tests in the ios don t run in the order we queue them probably is due to some multithreading issue or poor thread managment this issue will aim to solve this problem ,0
530,7481562091.0,IssuesEvent,2018-04-04 21:04:45,chapel-lang/chapel,https://api.github.com/repos/chapel-lang/chapel,opened,Replace easy_install usage with pip,area: BTR type: Portability,"Historically, we have used `easy_install` as our project dependency over `pip`, since it came with python installation. Some time has passed since that decision was made... Now `pip` is included with python installations (since Python 2.7.9), and `easy_install` is deprecated, so we should make the switch in our build system.
This change is primarily motivated by `easy_install` not using the latest TLS. PyPI has deprecated TLS versions below 1.2, and will deny all deprecated requests starting on April 8th (currently doing brownouts).
We currently use `easy_install` to install `virtualenv` for `make test-venv` and `make chpldoc`. After April 8th, these make targets may no longer work on certain systems (e.g. OS X), until we replace `easy_install` with `pip`.",True,"Replace easy_install usage with pip - Historically, we have used `easy_install` as our project dependency over `pip`, since it came with python installation. Some time has passed since that decision was made... Now `pip` is included with python installations (since Python 2.7.9), and `easy_install` is deprecated, so we should make the switch in our build system.
This change is primarily motivated by `easy_install` not using the latest TLS. PyPI has deprecated TLS versions below 1.2, and will deny all deprecated requests starting on April 8th (currently doing brownouts).
We currently use `easy_install` to install `virtualenv` for `make test-venv` and `make chpldoc`. After April 8th, these make targets may no longer work on certain systems (e.g. OS X), until we replace `easy_install` with `pip`.",1,replace easy install usage with pip historically we have used easy install as our project dependency over pip since it came with python installation some time has passed since that decision was made now pip is included with python installations since python and easy install is deprecated so we should make the switch in our build system this change is primarily motivated by easy install not using the latest tls pypi has deprecated tls versions below and will deny all deprecated requests starting on april currently doing brownouts we currently use easy install to install virtualenv for make test venv and make chpldoc after april these make targets may no longer work on certain systems e g os x until we replace easy install with pip ,1
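For illustration only (this is not Chapel's actual build tooling), the `easy_install virtualenv` step described above can be replaced by invoking the pip module that ships with recent Python releases; the helper name below is hypothetical.
```python
import subprocess
import sys


def install_virtualenv() -> None:
    # Equivalent of the old `easy_install virtualenv` step, but using pip,
    # which talks to PyPI over a modern TLS version.
    subprocess.check_call([sys.executable, "-m", "pip", "install", "virtualenv"])


if __name__ == "__main__":
    install_virtualenv()
```
Calling pip as `python -m pip` keeps the install tied to the same interpreter that will later create the virtualenv.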
907,11942872538.0,IssuesEvent,2020-04-02 21:52:26,Azure/azure-functions-host,https://api.github.com/repos/Azure/azure-functions-host,closed,Option to configure Maximum request length on HttpTrigger ,Supportability improvement,"When input exceeds 25 MB, HttpTrigger fails with a 'Maximum request length exceeded' error.
Similar question on [SO](http://stackoverflow.com/questions/41189588/how-to-set-maxreceivedmessagesize-for-azure-functions)
#### Repro steps
Provide the steps required to reproduce the problem
1. Create HttpTrigger-CSharp
2. Invoke with input string >25MB
#### Expected behavior
Expose option to configure Maximum request length
#### Actual behavior
Fails with error:
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Functions.HttpTriggerCSharp1 ---> System.InvalidOperationException: Exception binding parameter 'req' ---> System.Web.HttpException: Maximum request length exceeded.
at System.Web.HttpBufferlessInputStream.ValidateRequestEntityLength()
at System.Web.HttpBufferlessInputStream.GetPreloadedContent(Byte[] buffer, Int32& offset, Int32& count)
at System.Web.HttpBufferlessInputStream.BeginRead(Byte[] buffer, Int32 offset, Int32 count, AsyncCallback callback, Object state)
at System.Web.Http.NonOwnedStream.BeginRead(Byte[] buffer, Int32 offset, Int32 count, AsyncCallback callback, Object state)
at System.Net.Http.StreamToStreamCopy.StartRead()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Azure.WebJobs.Script.Binding.HttpTriggerAttributeBindingProvider.HttpTriggerBinding.d__15.MoveNext()
",True,"Option to configure Maximum request length on HttpTrigger - When input exceeds 25Mb, HttpTrigger fails with Maximum request length exceeded error.
Similar question on [SO](http://stackoverflow.com/questions/41189588/how-to-set-maxreceivedmessagesize-for-azure-functions)
#### Repro steps
Provide the steps required to reproduce the problem
1. Create HttpTrigger-CSharp
2. Invoke with input string >25MB
#### Expected behavior
Expose option to configure Maximum request length
#### Actual behavior
Fails with error:
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Functions.HttpTriggerCSharp1 ---> System.InvalidOperationException: Exception binding parameter 'req' ---> System.Web.HttpException: Maximum request length exceeded.
at System.Web.HttpBufferlessInputStream.ValidateRequestEntityLength()
at System.Web.HttpBufferlessInputStream.GetPreloadedContent(Byte[] buffer, Int32& offset, Int32& count)
at System.Web.HttpBufferlessInputStream.BeginRead(Byte[] buffer, Int32 offset, Int32 count, AsyncCallback callback, Object state)
at System.Web.Http.NonOwnedStream.BeginRead(Byte[] buffer, Int32 offset, Int32 count, AsyncCallback callback, Object state)
at System.Net.Http.StreamToStreamCopy.StartRead()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Azure.WebJobs.Script.Binding.HttpTriggerAttributeBindingProvider.HttpTriggerBinding.d__15.MoveNext()
",1,option to configure maximum request length on httptrigger when input exceeds httptrigger fails with maximum request length exceeded error similar question on repro steps provide the steps required to reproduce the problem create httptrigger csharp invoke with input string expected behavior expose option to configure maximum request length actual behavior fails with error microsoft azure webjobs host functioninvocationexception exception while executing function functions system invalidoperationexception exception binding parameter req system web httpexception maximum request length exceeded at system web httpbufferlessinputstream validaterequestentitylength at system web httpbufferlessinputstream getpreloadedcontent byte buffer offset count at system web httpbufferlessinputstream beginread byte buffer offset count asynccallback callback object state at system web http nonownedstream beginread byte buffer offset count asynccallback callback object state at system net http streamtostreamcopy startread end of stack trace from previous location where exception was thrown at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at microsoft azure webjobs script binding httptriggerattributebindingprovider httptriggerbinding d movenext ,1
562,7869186115.0,IssuesEvent,2018-06-24 10:44:34,MiKTeX/miktex,https://api.github.com/repos/MiKTeX/miktex,closed,MiKTeX portable creates start menu entries,bug portable workaround,"Although MiKTeX portable is meant to ""[leave no traces on the host computer](https://miktex.org/portable)"", start menu entries are created while installing or updating packages. If I manually remove them, they get recreated on the next update.

",True,"MiKTeX portable creates start menu entries - Although MiKTeX portable is meant to ""[leave no traces on the host computer](https://miktex.org/portable)"", start menu entries are created while installing or updating packages. If I manually remove them, they get recreated on the next update.

",1,miktex portable creates start menu entries although miktex portable is meant to start menu entries are created while installing or updating packages if i manually remove them they get recreated on the next update ,1
15234,5087877132.0,IssuesEvent,2016-12-31 11:08:26,SleepyTrousers/EnderIO,https://api.github.com/repos/SleepyTrousers/EnderIO,closed,[Suggestion] Liquid XP -> Mending,1.10 Code Complete enhancement,"I would like to be able to use my massive supply of Liquid XP to repair items that have the Mending enchantment. Compatibility with Tinkers' Construct Mending Moss would also be appreciated. I can think of three possible implementations.
1) Glass bottles can be filled with liquid XP to make Bottle o' Enchanting bottles. The Minecraft wiki states that a Bottle o' Enchanting will drop somewhere between 3 and 11 experience, and it would be quite reasonable for this to be a slightly inefficient method of mending tools. I would suggest each bottle costs around 10 to 15 experience points worth of liquid xp (Ideally 11+ to ensure no cases where you might get more xp than you started out with).
2) Experience Obelisk gets 'drop as XP orbs' button(s).
3) Liquid XP can be used for mending directly in certain Ender IO machines. This could be added to the Experience Obelisk; or a new machine could be added, like a Liquid XP Infuser. The Killer Joe (or an upgraded version of it) could also accept liquid XP for mending and remove all experience orb attracting as an alternative fix for #3823.",1.0,"[Suggestion] Liquid XP -> Mending - I would like to be able to use my massive supply of Liquid XP to repair items that have the Mending enchantment. Compatibility with Tinkers' Construct Mending Moss would also be appreciated. I can think of three possible implementations.
1) Glass bottles can be filled with liquid XP to make Bottle o' Enchanting bottles. The Minecraft wiki states that a Bottle o' Enchanting will drop somewhere between 3 and 11 experience, and it would be quite reasonable for this to be a slightly inefficient method of mending tools. I would suggest each bottle costs around 10 to 15 experience points worth of liquid xp (Ideally 11+ to ensure no cases where you might get more xp than you started out with).
2) Experience Obelisk gets 'drop as XP orbs' button(s).
3) Liquid XP can be used for mending directly in certain Ender IO machines. This could be added to the Experience Obelisk; or a new machine could be added, like a Liquid XP Infuser. The Killer Joe (or an upgraded version of it) could also accept liquid XP for mending and remove all experience orb attracting as an alternative fix for #3823.",0, liquid xp mending i would like to be able to use my massive supply of liquid xp to repair items that have the mending enchantment compatibility with tinkers construct mending moss would also be appreciated i can think of three possible implementations glass bottles can be filled with liquid xp to make bottle o enchanting bottles the minecraft wiki states that a bottle o enchanting will drop somewhere between and experience and it would be quite reasonable for this to be a slightly inefficient method of mending tools i would suggest each bottle costs around to experience points worth of liquid xp ideally to ensure no cases where you might get more xp than you started out with experience obelisk gets drop as xp orbs button s liquid xp can be used for mending directly in certain ender io machines this could be added to the experience obelisk or a new machine could be added like a liquid xp infuser the killer joe or an upgraded version of it could also accept liquid xp for mending and remove all experience orb attracting as an alternative fix for ,0
1957,30645138596.0,IssuesEvent,2023-07-25 03:33:35,lawmurray/doxide,https://api.github.com/repos/lawmurray/doxide,closed,Replace getopt_long(),good first issue portability & packaging,"Doxide currently uses POSIX `getopt_long()` to parse command-line options. This should be replaced with something portable, such as [CLI11](https://github.com/CLIUtils/CLI11), to enable Windows support.",True,"Replace getopt_long() - Doxide currently uses POSIX `getopt_long()` to parse command-line options. This should be replaced with something portable, such as [CLI11](https://github.com/CLIUtils/CLI11), to enable Windows support.",1,replace getopt long doxide currently uses posix getopt long to parse command line options this should be replaced with something portable such as to enable windows support ,1
1666,24014719514.0,IssuesEvent,2022-09-14 22:42:40,facebookincubator/velox,https://api.github.com/repos/facebookincubator/velox,closed,Make SIMD support optional in favor of portability,portability stale,The current Velox implementation uses SIMD instructions specific to Intel. We must make this use optional in favor of portability to other hardware such as the Mac M1 processors. ,True,Make SIMD support optional in favor of portability - The current Velox implementation uses SIMD instructions specific to Intel. We must make this use optional in favor of portability to other hardware such as the Mac M1 processors. ,1,make simd support optional in favor of portability the current velox implementation uses simd instructions specific to intel we must make this use optional in favor of portability to other hardware such as the mac processors ,1
1831,26986486726.0,IssuesEvent,2023-02-09 16:30:04,golang/vulndb,https://api.github.com/repos/golang/vulndb,closed,x/vulndb: potential Go vuln in github.com/argoproj/argo-cd: GHSA-q9hr-j4rf-8fjc,excluded: NOT_IMPORTABLE,"In GitHub Security Advisory [GHSA-q9hr-j4rf-8fjc](https://github.com/advisories/GHSA-q9hr-j4rf-8fjc), there is a vulnerability in the following Go packages or modules:
| Unit | Fixed | Vulnerable Ranges |
| - | - | - |
| [github.com/argoproj/argo-cd](https://pkg.go.dev/github.com/argoproj/argo-cd) | 2.6.0-rc5 | >= 2.6.0-rc1, < 2.6.0-rc5 || [github.com/argoproj/argo-cd](https://pkg.go.dev/github.com/argoproj/argo-cd) | 2.5.8 | >= 2.5.0, < 2.5.8 || [github.com/argoproj/argo-cd](https://pkg.go.dev/github.com/argoproj/argo-cd) | 2.4.20 | >= 2.4.0, < 2.4.20 || [github.com/argoproj/argo-cd](https://pkg.go.dev/github.com/argoproj/argo-cd) | 2.3.14 | >= 1.8.2, < 2.3.14 |
Cross references:
- Module github.com/argoproj/argo-cd appears in issue #304 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #357 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #358 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #359 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #387 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #453 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #454 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #455 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #495 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #497 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #498 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #499 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #516 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #517 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #518 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #882 NOT_IMPORTABLE
- Module github.com/argoproj/argo-cd appears in issue #892 NOT_IMPORTABLE
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/argoproj/argo-cd
versions:
- introduced: 2.6.0-rc1
fixed: 2.6.0-rc5
packages:
- package: github.com/argoproj/argo-cd
- module: github.com/argoproj/argo-cd
versions:
- introduced: 2.5.0
fixed: 2.5.8
packages:
- package: github.com/argoproj/argo-cd
- module: github.com/argoproj/argo-cd
versions:
- introduced: 2.4.0
fixed: 2.4.20
packages:
- package: github.com/argoproj/argo-cd
- module: github.com/argoproj/argo-cd
versions:
- introduced: 1.8.2
fixed: 2.3.14
packages:
- package: github.com/argoproj/argo-cd
description: ""### Impact\n\nAll versions of Argo CD starting with v1.8.2 are vulnerable
to an improper authorization bug causing the API to accept certain invalid tokens.\n\nOIDC
providers include an `aud` (audience) claim in signed tokens. The value of that
claim specifies the intended audience(s) of the token (i.e. the service or services
which are meant to accept the token). Argo CD _does_ validate that the token was
signed by Argo CD's configured OIDC provider. But Argo CD _does not_ validate
the audience claim, so it will accept tokens that are not intended for Argo CD.\n\nIf
Argo CD's configured OIDC provider also serves other audiences (for example, a
file storage service), then Argo CD will accept a token intended for one of those
other audiences. Argo CD will grant the user privileges based on the token's `groups`
claim, even though those groups were not intended to be used by Argo CD.\n\nThis
bug also increases the blast radius of a stolen token. If an attacker steals a
valid token for a different audience, they can use it to access Argo CD.\n\n###
Patches\n\nA patch for this vulnerability has been released in the following Argo
CD versions:\n\n* v2.6.0-rc5\n* v2.5.8\n* v2.4.20\n* v2.3.14\n\nThe patch introduces
a new `allowedAudiences` to the OIDC config block. By default, the client ID is
the only allowed audience. Users who _want_ Argo CD to accept tokens intended
for a different audience may use `allowedAudiences` to specify those audiences.\n\n```yaml\napiVersion:
v1\nkind: ConfigMap\nmetadata:\n name: argocd-cm\ndata:\n oidc.config: |\n name:
Example\n allowedAudiences:\n - audience-1\n - audience-2\n - argocd-client-id
\ # If `allowedAudiences` is non-empty, Argo CD's client ID must be explicitly
added if you want to allow it.\n```\n\nEven though [the OIDC spec requires the
audience claim](https://openid.net/specs/openid-connect-core-1_0.html#IDToken),
some tokens may not include it. To avoid a breaking change in a patch release,
versions < 2.6.0 of Argo CD will skip the audience claim check for tokens that
have no audience. In versions >= 2.6.0, Argo CD will reject all tokens which do
not have an audience claim. Users can opt into the old behavior by setting an
option:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: argocd-cm\ndata:\n
\ oidc.config: |\n name: Example\n skipAudienceCheckWhenTokenHasNoAudience:
true\n```\n\n### Workarounds\n\nThere is no workaround besides upgrading.\n\n###
Credits \n\nThe Argo CD team would like to express their gratitude to Vladimir
Pouzanov (@farcaller) from Indeed, who discovered the issue, reported it confidentially
according to our [guidelines](https://github.com/argoproj/argo-cd/blob/master/SECURITY.md#reporting-a-vulnerability),
and actively worked with the project to provide a remedy. Many thanks to Vladimir!\n\n###
References\n\n* [How to configure OIDC in Argo CD](https://argo-cd.readthedocs.io/en/latest/operator-manual/user-management/#existing-oidc-provider)\n*
[OIDC spec section discussing the audience claim](https://openid.net/specs/openid-connect-core-1_0.html#IDToken)\n*
[JWT spec section discussing the audience claim](https://www.rfc-editor.org/rfc/rfc7519#section-4.1.3)\n\n###
For more information\n\n* Open an issue in [the Argo CD issue tracker](https://github.com/argoproj/argo-cd/issues)
or [discussions](https://github.com/argoproj/argo-cd/discussions)\n* Join us on
[Slack](https://argoproj.github.io/community/join-slack) in channel #argo-cd\n""
cves:
- CVE-2023-22482
ghsas:
- GHSA-q9hr-j4rf-8fjc
```",True,"x/vulndb: potential Go vuln in github.com/argoproj/argo-cd: GHSA-q9hr-j4rf-8fjc - In GitHub Security Advisory [GHSA-q9hr-j4rf-8fjc](https://github.com/advisories/GHSA-q9hr-j4rf-8fjc), there is a vulnerability in the following Go packages or modules:
| Unit | Fixed | Vulnerable Ranges |
| - | - | - |
| [github.com/argoproj/argo-cd](https://pkg.go.dev/github.com/argoproj/argo-cd) | 2.6.0-rc5 | >= 2.6.0-rc1, < 2.6.0-rc5 || [github.com/argoproj/argo-cd](https://pkg.go.dev/github.com/argoproj/argo-cd) | 2.5.8 | >= 2.5.0, < 2.5.8 || [github.com/argoproj/argo-cd](https://pkg.go.dev/github.com/argoproj/argo-cd) | 2.4.20 | >= 2.4.0, < 2.4.20 || [github.com/argoproj/argo-cd](https://pkg.go.dev/github.com/argoproj/argo-cd) | 2.3.14 | >= 1.8.2, < 2.3.14 |
Cross references:
- Module github.com/argoproj/argo-cd appears in issue #304 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #357 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #358 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #359 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #387 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #453 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #454 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #455 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #495 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #497 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #498 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #499 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #516 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #517 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #518 EFFECTIVELY_PRIVATE
- Module github.com/argoproj/argo-cd appears in issue #882 NOT_IMPORTABLE
- Module github.com/argoproj/argo-cd appears in issue #892 NOT_IMPORTABLE
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/argoproj/argo-cd
versions:
- introduced: 2.6.0-rc1
fixed: 2.6.0-rc5
packages:
- package: github.com/argoproj/argo-cd
- module: github.com/argoproj/argo-cd
versions:
- introduced: 2.5.0
fixed: 2.5.8
packages:
- package: github.com/argoproj/argo-cd
- module: github.com/argoproj/argo-cd
versions:
- introduced: 2.4.0
fixed: 2.4.20
packages:
- package: github.com/argoproj/argo-cd
- module: github.com/argoproj/argo-cd
versions:
- introduced: 1.8.2
fixed: 2.3.14
packages:
- package: github.com/argoproj/argo-cd
description: ""### Impact\n\nAll versions of Argo CD starting with v1.8.2 are vulnerable
to an improper authorization bug causing the API to accept certain invalid tokens.\n\nOIDC
providers include an `aud` (audience) claim in signed tokens. The value of that
claim specifies the intended audience(s) of the token (i.e. the service or services
which are meant to accept the token). Argo CD _does_ validate that the token was
signed by Argo CD's configured OIDC provider. But Argo CD _does not_ validate
the audience claim, so it will accept tokens that are not intended for Argo CD.\n\nIf
Argo CD's configured OIDC provider also serves other audiences (for example, a
file storage service), then Argo CD will accept a token intended for one of those
other audiences. Argo CD will grant the user privileges based on the token's `groups`
claim, even though those groups were not intended to be used by Argo CD.\n\nThis
bug also increases the blast radius of a stolen token. If an attacker steals a
valid token for a different audience, they can use it to access Argo CD.\n\n###
Patches\n\nA patch for this vulnerability has been released in the following Argo
CD versions:\n\n* v2.6.0-rc5\n* v2.5.8\n* v2.4.20\n* v2.3.14\n\nThe patch introduces
a new `allowedAudiences` to the OIDC config block. By default, the client ID is
the only allowed audience. Users who _want_ Argo CD to accept tokens intended
for a different audience may use `allowedAudiences` to specify those audiences.\n\n```yaml\napiVersion:
v1\nkind: ConfigMap\nmetadata:\n name: argocd-cm\ndata:\n oidc.config: |\n name:
Example\n allowedAudiences:\n - audience-1\n - audience-2\n - argocd-client-id
\ # If `allowedAudiences` is non-empty, Argo CD's client ID must be explicitly
added if you want to allow it.\n```\n\nEven though [the OIDC spec requires the
audience claim](https://openid.net/specs/openid-connect-core-1_0.html#IDToken),
some tokens may not include it. To avoid a breaking change in a patch release,
versions < 2.6.0 of Argo CD will skip the audience claim check for tokens that
have no audience. In versions >= 2.6.0, Argo CD will reject all tokens which do
not have an audience claim. Users can opt into the old behavior by setting an
option:\n\n```yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: argocd-cm\ndata:\n
\ oidc.config: |\n name: Example\n skipAudienceCheckWhenTokenHasNoAudience:
true\n```\n\n### Workarounds\n\nThere is no workaround besides upgrading.\n\n###
Credits \n\nThe Argo CD team would like to express their gratitude to Vladimir
Pouzanov (@farcaller) from Indeed, who discovered the issue, reported it confidentially
according to our [guidelines](https://github.com/argoproj/argo-cd/blob/master/SECURITY.md#reporting-a-vulnerability),
and actively worked with the project to provide a remedy. Many thanks to Vladimir!\n\n###
References\n\n* [How to configure OIDC in Argo CD](https://argo-cd.readthedocs.io/en/latest/operator-manual/user-management/#existing-oidc-provider)\n*
[OIDC spec section discussing the audience claim](https://openid.net/specs/openid-connect-core-1_0.html#IDToken)\n*
[JWT spec section discussing the audience claim](https://www.rfc-editor.org/rfc/rfc7519#section-4.1.3)\n\n###
For more information\n\n* Open an issue in [the Argo CD issue tracker](https://github.com/argoproj/argo-cd/issues)
or [discussions](https://github.com/argoproj/argo-cd/discussions)\n* Join us on
[Slack](https://argoproj.github.io/community/join-slack) in channel #argo-cd\n""
cves:
- CVE-2023-22482
ghsas:
- GHSA-q9hr-j4rf-8fjc
```",1,x vulndb potential go vuln in github com argoproj argo cd ghsa in github security advisory there is a vulnerability in the following go packages or modules unit fixed vulnerable ranges cross references module github com argoproj argo cd appears in issue effectively private module github com argoproj argo cd appears in issue effectively private module github com argoproj argo cd appears in issue effectively private module github com argoproj argo cd appears in issue effectively private module github com argoproj argo cd appears in issue effectively private module github com argoproj argo cd appears in issue effectively private module github com argoproj argo cd appears in issue effectively private module github com argoproj argo cd appears in issue effectively private module github com argoproj argo cd appears in issue effectively private module github com argoproj argo cd appears in issue effectively private module github com argoproj argo cd appears in issue effectively private module github com argoproj argo cd appears in issue effectively private module github com argoproj argo cd appears in issue effectively private module github com argoproj argo cd appears in issue effectively private module github com argoproj argo cd appears in issue effectively private module github com argoproj argo cd appears in issue not importable module github com argoproj argo cd appears in issue not importable see for instructions on how to triage this report modules module github com argoproj argo cd versions introduced fixed packages package github com argoproj argo cd module github com argoproj argo cd versions introduced fixed packages package github com argoproj argo cd module github com argoproj argo cd versions introduced fixed packages package github com argoproj argo cd module github com argoproj argo cd versions introduced fixed packages package github com argoproj argo cd description impact n nall versions of argo cd starting with are vulnerable to an improper authorization bug causing the api to accept certain invalid tokens n noidc providers include an aud audience claim in signed tokens the value of that claim specifies the intended audience s of the token i e the service or services which are meant to accept the token argo cd does validate that the token was signed by argo cd s configured oidc provider but argo cd does not validate the audience claim so it will accept tokens that are not intended for argo cd n nif argo cd s configured oidc provider also serves other audiences for example a file storage service then argo cd will accept a token intended for one of those other audiences argo cd will grant the user privileges based on the token s groups claim even though those groups were not intended to be used by argo cd n nthis bug also increases the blast radius of a stolen token if an attacker steals a valid token for a different audience they can use it to access argo cd n n patches n na patch for this vulnerability has been released in the following argo cd versions n n n n n n nthe patch introduces a new allowedaudiences to the oidc config block by default the client id is the only allowed audience users who want argo cd to accept tokens intended for a different audience may use allowedaudiences to specify those audiences n n yaml napiversion nkind configmap nmetadata n name argocd cm ndata n oidc config n name example n allowedaudiences n audience n audience n argocd client id if allowedaudiences is non empty argo cd s client id must be explicitly added if you want to allow it n n 
neven though the oidc spec requires the audience claim some tokens may not include it to avoid a breaking change in a patch release versions of argo cd will skip the audience claim check for tokens that have no audience in versions argo cd will reject all tokens which do not have an audience claim users can opt into the old behavior by setting an option n n yaml napiversion nkind configmap nmetadata n name argocd cm ndata n oidc config n name example n skipaudiencecheckwhentokenhasnoaudience true n n n workarounds n nthere is no workaround besides upgrading n n credits n nthe argo cd team would like to express their gratitude to vladimir pouzanov farcaller from indeed who discovered the issue reported it confidentially according to our and actively worked with the project to provide a remedy many thanks to vladimir n n references n n for more information n n open an issue in or join us on in channel argo cd n cves cve ghsas ghsa ,1
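The core of the patch described in this advisory is an allow-list check on the token's `aud` claim. A small hedged sketch of that check using the PyJWT library (Argo CD itself is written in Go; the key handling is elided, and `argocd-client-id` is simply the example value from the advisory):
```python
import jwt  # PyJWT

ALLOWED_AUDIENCES = ["argocd-client-id"]  # assumed client ID; configure per deployment


def validate_token(token: str, signing_key: str) -> dict:
    # jwt.decode verifies the signature AND rejects tokens whose `aud` claim
    # does not match one of the allowed audiences, raising
    # jwt.InvalidAudienceError otherwise.
    return jwt.decode(
        token,
        signing_key,
        algorithms=["RS256"],
        audience=ALLOWED_AUDIENCES,
    )
```
The important point is that signature verification alone is not enough; without the `audience` argument a structurally valid token minted for a different service would be accepted.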
22255,2648484842.0,IssuesEvent,2015-03-14 00:16:03,prikhi/pencil,https://api.github.com/repos/prikhi/pencil,closed,XML Parsing Error when exporting document,2–5 stars bug imported Priority-Medium,"_From [paul.b.d...@gmail.com](https://code.google.com/u/105322434964372756384/) on February 23, 2010 12:46:32_
What steps will reproduce the problem? 1. Install standalone pencil
2. Install Document Export template for Simple HTML
3. Start new document
4. Select Document->Export Document
5. Select ""Single web page""
6. Select ""All pages in the document""
7. Select ""Default HTML Template"" What is the expected output? What do you see instead? Expect a html file somewhere yet I get the following error:
XML Parsing Error: undefined entity
Location: chrome://mozapps/content/downloads/unknownContentType.xul
Line Number 30, Column 18: \&intro.label;
-----------------^ What version of the product are you using? On what operating system? Version 1.1 build 1
On Windows XP SP2 Please provide any additional information below.
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=149_",1.0,"XML Parsing Error when exporting document - _From [paul.b.d...@gmail.com](https://code.google.com/u/105322434964372756384/) on February 23, 2010 12:46:32_
What steps will reproduce the problem? 1. Install standalone pencil
2. Install Document Export template for Simple HTML
3. Start new document
4. Select Document->Export Document
5. Select ""Single web page""
6. Select ""All pages in the document""
7. Select ""Default HTML Template"" What is the expected output? What do you see instead? Expect a html file somewhere yet I get the following error:
XML Parsing Error: undefined entity
Location: chrome://mozapps/content/downloads/unknownContentType.xul
Line Number 30, Column 18: \&intro.label;
-----------------^ What version of the product are you using? On what operating system? Version 1.1 build 1
On Windows XP SP2 Please provide any additional information below.
_Original issue: http://code.google.com/p/evoluspencil/issues/detail?id=149_",0,xml parsing error when exporting document from on february what steps will reproduce the problem install standalone pencil install document export template for simple html start new document select document export document select single web page select all pages in the document select default html template what is the expected output what do you see instead expect a html file somewhere yet i get the following error xml parsing error undefined entity location chrome mozapps content downloads unknowncontenttype xul line number column intro label what version of the product are you using on what operating system version build on windows xp please provide any additional information below original issue ,0
1711,24929989096.0,IssuesEvent,2022-10-31 10:46:54,AzureAD/microsoft-authentication-library-for-dotnet,https://api.github.com/repos/AzureAD/microsoft-authentication-library-for-dotnet,closed,[Bug] Region failures not well logged,bug P2 Supportability,"msal 4.44
1. Create CCA with region set to auto-detection. Set logger to Info.
2. AcquireTokenForClient in an env not on Azure (e.g. personal dev machine)
3. AcquireTokenForClient again.
Actual: in the log I see
## first call
`[Region discovery] Auto-discovery successful but found null or empty region.` **info** (this is misleading)
`[Region discovery] Region from REGION_NAME env variable not detected. 6/7/2022 6:10:03 PM` (verbose)
HTTP exception when calling IMDS (verbose)
## subsequent calls
[Region discovery] Not using a regional authority.
Expected:
## first call
status of discovery from env variable and IMDS
## first and subsequent calls
reason for auto-discovery failure (at WARN level)
",True,"[Bug] Region failures not well logged - msal 4.44
1. Create CCA with region set to auto-detection. Set logger to Info.
2. AcquireTokenForClient in an env not on Azure (e.g. personal dev machine)
3. AcquireTokenForClient again.
Actual: in the log I see
## first call
`[Region discovery] Auto-discovery successful but found null or empty region.` **info** (this is misleading)
`[Region discovery] Region from REGION_NAME env variable not detected. 6/7/2022 6:10:03 PM` (verbose)
HTTP exception when calling IMDS (verbose)
## subsequent calls
[Region discovery] Not using a regional authority.
Expected:
## first call
status of discovery from env variable and IMDS
## first and subsequent calls
reason for auto-discovery failure (at WARN level)
",1, region failures not well logged msal create cca with region set to auto detection set logger to info acquiretokenforclient in an env not on azure e g personal dev machine acquiretokenforclient again actual in the log i see first call auto discovery successful but found null or empty region info this is misleading region from region name env variable not detected pm verbose http expection for calling imds verbose subsequent calls not using a regional authority expected first call status of discovery from env variable and imds first and subsequent calls reason for auto discovery failure at warn level ,1
350720,24997458583.0,IssuesEvent,2022-11-03 02:49:19,Sam-Radnus/PyCalling,https://api.github.com/repos/Sam-Radnus/PyCalling,opened,Update Docs,documentation good first issue,"remove the following from the docs
Host Registration and Login
[Host Registration](https://socioauth-login.herokuapp.com/api/register/)
[Host Login](https://socioauth-login.herokuapp.com/api/token/)
click 'Join as a Host' to enter the room as a host
replace
the first preview image with the screenshot of the current landing page",1.0,"Update Docs - remove the following from the docs
Host Registration and Login
[Host Registration](https://socioauth-login.herokuapp.com/api/register/)
[Host Login](https://socioauth-login.herokuapp.com/api/token/)
click 'Join as a Host' to enter the room as a host
replace
the first preview image with the screenshot of the current landing page",0,update docs remove the following from the docs host registration and login click join as a host to enter room as a host replace the first preview image with the screenshot of the current landing page,0
606,8180021275.0,IssuesEvent,2018-08-28 18:10:29,chapel-lang/chapel,https://api.github.com/repos/chapel-lang/chapel,closed,CHPL_LIBMODE=shared breaks parallel `make -j`,area: Makefiles type: Portability user issue,"Clean git directory. Commit a5828c85a78425f95473be.
No fancy environment variables other than `CHPL_LIBMODE=shared` (building single-locale).
```bash
cd $CHPL_HOME # Git directory.
git clean -Xfd .
make -j
```
When doing a parallel build with `make -j` and **`CHPL_LIBMODE=shared`**, the build breaks consistently, always, and exactly (sample size = 5) when it gets to this part:
```terminal
...
cd <$CHPL_HOME>/third-party/qthread/build/linux64-gnu-native-flat-jemalloc-hwloc && make
...
Making all in src
CC cacheline.lo
CC envariables.lo
CC feb.lo
CC hazardptrs.lo
...
CCPAS fastcontext/asm.lo
CC fastcontext/context.lo
<$CHPL_HOME>/third-party/qthread/qthread-src/src/alloc/chapel.c:18:22: fatal error: chpl-mem.h: No such file or directory
#include ""chpl-mem.h""
^
compilation terminated.
make[7]: *** [alloc/chapel.lo] Error 1
make[7]: *** Waiting for unfinished jobs....
<$CHPL_HOME>/third-party/qthread/qthread-src/src/affinity/hwloc_via_chapel.c:7:23: fatal error: chpl-topo.h: No such file or directory
#include ""chpl-topo.h""
^
compilation terminated.
make[7]: *** [affinity/hwloc_via_chapel.lo] Error 1
make[6]: *** [all-recursive] Error 1
make[5]: *** [all-recursive] Error 1
make[4]: *** [qthread-build] Error 2
make[3]: *** [<$CHPL_HOME>third-party/qthread/install/linux64-gnu-native-flat-jemalloc-hwloc] Error 2
make[2]: *** [third-party-pkgs] Error 2
make[1]: *** [runtime] Error 2
make: *** [comprt] Error 2
```
Notes:
1. This error does not happen when `CHPL_LIBMODE` is unset (sample size = 1).
2. If I run `make -j` again right after the failure, the build completes successfully.",True,"CHPL_LIBMODE=shared breaks parallel `make -j` - Clean git directory. Commit a5828c85a78425f95473be.
No fancy environment variables other than `CHPL_LIBMODE=shared` (building single-locale).
```bash
cd $CHPL_HOME # Git directory.
git clean -Xfd .
make -j
```
When doing a parallel build with `make -j` and **`CHPL_LIBMODE=shared`**, the build breaks consistently, always, and exactly (sample size = 5) when it gets to this part:
```terminal
...
cd <$CHPL_HOME>/third-party/qthread/build/linux64-gnu-native-flat-jemalloc-hwloc && make
...
Making all in src
CC cacheline.lo
CC envariables.lo
CC feb.lo
CC hazardptrs.lo
...
CCPAS fastcontext/asm.lo
CC fastcontext/context.lo
<$CHPL_HOME>/third-party/qthread/qthread-src/src/alloc/chapel.c:18:22: fatal error: chpl-mem.h: No such file or directory
#include ""chpl-mem.h""
^
compilation terminated.
make[7]: *** [alloc/chapel.lo] Error 1
make[7]: *** Waiting for unfinished jobs....
<$CHPL_HOME>/third-party/qthread/qthread-src/src/affinity/hwloc_via_chapel.c:7:23: fatal error: chpl-topo.h: No such file or directory
#include ""chpl-topo.h""
^
compilation terminated.
make[7]: *** [affinity/hwloc_via_chapel.lo] Error 1
make[6]: *** [all-recursive] Error 1
make[5]: *** [all-recursive] Error 1
make[4]: *** [qthread-build] Error 2
make[3]: *** [<$CHPL_HOME>third-party/qthread/install/linux64-gnu-native-flat-jemalloc-hwloc] Error 2
make[2]: *** [third-party-pkgs] Error 2
make[1]: *** [runtime] Error 2
make: *** [comprt] Error 2
```
Notes:
1. This error does not happen when `CHPL_LIBMODE` is unset (sample size = 1).
2. If I run `make -j` again right after the failure, the build completes successfully.",1,chpl libmode shared breaks parallel make j clean git directory commit no fancy environment variables other than chpl libmode shared building single locale bash cd chpl home git directory git clean xfd make j when doing a parallel build with make j and chpl libmode shared the build breaks consistently always and exactly sample size when it gets to this part terminal cd third party qthread build gnu native flat jemalloc hwloc make making all in src cc cacheline lo cc envariables lo cc feb lo cc hazardptrs lo ccpas fastcontext asm lo cc fastcontext context lo third party qthread qthread src src alloc chapel c fatal error chpl mem h no such file or directory include chpl mem h compilation terminated make error make waiting for unfinished jobs third party qthread qthread src src affinity hwloc via chapel c fatal error chpl topo h no such file or directory include chpl topo h compilation terminated make error make error make error make error make error make error make error make error notes this error does not happen when chpl libmode is unset sample size if i run make j again right after the failure the build completes successfully ,1
41211,12831769206.0,IssuesEvent,2020-07-07 06:16:03,rvvergara/todolist-frontend-igaku,https://api.github.com/repos/rvvergara/todolist-frontend-igaku,closed,CVE-2018-19837 (Medium) detected in node-sass-4.13.0.tgz,security vulnerability,"## CVE-2018-19837 - Medium Severity Vulnerability
Vulnerable Library - node-sass-4.13.0.tgz
Wrapper around libsass
Library home page: https://registry.npmjs.org/node-sass/-/node-sass-4.13.0.tgz
Path to dependency file: /tmp/ws-scm/todolist-frontend-igaku/package.json
Path to vulnerable library: /todolist-frontend-igaku/node_modules/node-sass/package.json
Dependency Hierarchy:
- :x: **node-sass-4.13.0.tgz** (Vulnerable Library)
Found in HEAD commit: b17e5f3a30530082b47eaec4f1bfc545245f9563
Vulnerability Details
In LibSass prior to 3.5.5, Sass::Eval::operator()(Sass::Binary_Expression*) inside eval.cpp allows attackers to cause a denial-of-service resulting from stack consumption via a crafted sass file, because of certain incorrect parsing of '%' as a modulo operator in parser.cpp.
Publish Date: 2018-12-04
URL: CVE-2018-19837
CVSS 3 Score Details (6.5 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19837
Fix Resolution: 3.5.5
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2018-19837 (Medium) detected in node-sass-4.13.0.tgz - ## CVE-2018-19837 - Medium Severity Vulnerability
Vulnerable Library - node-sass-4.13.0.tgz
Wrapper around libsass
Library home page: https://registry.npmjs.org/node-sass/-/node-sass-4.13.0.tgz
Path to dependency file: /tmp/ws-scm/todolist-frontend-igaku/package.json
Path to vulnerable library: /todolist-frontend-igaku/node_modules/node-sass/package.json
Dependency Hierarchy:
- :x: **node-sass-4.13.0.tgz** (Vulnerable Library)
Found in HEAD commit: b17e5f3a30530082b47eaec4f1bfc545245f9563
Vulnerability Details
In LibSass prior to 3.5.5, Sass::Eval::operator()(Sass::Binary_Expression*) inside eval.cpp allows attackers to cause a denial-of-service resulting from stack consumption via a crafted sass file, because of certain incorrect parsing of '%' as a modulo operator in parser.cpp.
Publish Date: 2018-12-04
URL: CVE-2018-19837
CVSS 3 Score Details (6.5 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19837
Fix Resolution: 3.5.5
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in node sass tgz cve medium severity vulnerability vulnerable library node sass tgz wrapper around libsass library home page a href path to dependency file tmp ws scm todolist frontend igaku package json path to vulnerable library todolist frontend igaku node modules node sass package json dependency hierarchy x node sass tgz vulnerable library found in head commit a href vulnerability details in libsass prior to sass eval operator sass binary expression inside eval cpp allows attackers to cause a denial of service resulting from stack consumption via a crafted sass file because of certain incorrect parsing of as a modulo operator in parser cpp publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href fix resolution step up your open source security game with whitesource ,0
413959,27974200336.0,IssuesEvent,2023-03-25 11:30:20,falconry/falcon,https://api.github.com/repos/falconry/falcon,closed,Allow to customize not_found responder,documentation needs contributor,"AFAIK, there is [no way](https://github.com/falconry/falcon/blob/5fcfbd7d095d88d88027047998850f3503300ab8/falcon/api.py#L499) now to customize the responder when a request has no registered responder.
I suggest registering this responder as falcon.API attribute (so it's easy to override it when inheriting from falcon.API) and/or allowing to pass it as init kwargs (of course defaulting to `falcon.responder.not_found`).
I can work on a PR.
Thoughts?
",1.0,"Allow to customize not_found responder - AFAIK, there is [no way](https://github.com/falconry/falcon/blob/5fcfbd7d095d88d88027047998850f3503300ab8/falcon/api.py#L499) now to customize the responder when a request has no registered responder.
I suggest registering this responder as falcon.API attribute (so it's easy to override it when inheriting from falcon.API) and/or allowing to pass it as init kwargs (of course defaulting to `falcon.responder.not_found`).
I can work on a PR.
Thoughts?
",0,allow to customize not found responder afaik there is now to customize the responder when a request has no registered responder i suggest registering this responder as falcon api attribute so it s easy to override it when inheriting from falcon api and or allowing to pass it as init kwargs of course defaulting to falcon responder not found i can work on a pr thoughts ,0
502612,14562866528.0,IssuesEvent,2020-12-17 01:04:52,codeRIT/hackathon-manager,https://api.github.com/repos/codeRIT/hackathon-manager,opened,Allow agreements to be fully customizable,2.1.2 high priority,"Currently we only allow URLs for agreements. To comply with the MLH Member Agreement, members are required to phrase agreements in a certain format. This format is currently not supported with the new agreement model.
Reference: https://docs.google.com/document/d/1K7HSIEO8tA7vbD0dtwvesMOhDAea8nDKJlu3f3kS8pE/edit",1.0,"Allow agreements to be fully customizable - Currently we only allow URLs for agreements. To comply with the MLH Member Agreement, members are required to phrase agreements in a certain format. This format is currently not supported with the new agreement model.
Reference: https://docs.google.com/document/d/1K7HSIEO8tA7vbD0dtwvesMOhDAea8nDKJlu3f3kS8pE/edit",0,allow agreements to be fully customizable currently we only allow urls for agreements to comply with the mlh member agreement members are required to phrase agreements in a certain format this format is currently not supported with the new agreement model reference ,0
395967,11699305564.0,IssuesEvent,2020-03-06 15:24:48,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,www.amazon.fr - site is not usable,browser-fenix engine-gecko ml-needsdiagnosis-false priority-important,"
**URL**: https://www.amazon.fr/ap/signin
**Browser / Version**: Firefox Mobile 75.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: Keeps saying incorrect password while it is correct
**Steps to Reproduce**:
Amazon shows incorrect password error, while the password is good. At the same time, I'm receiving an email with confirmation code. This is the same process as on desktop, except that on desktop I am presented with a prompt to input the confirmation code.
Browser Configuration
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"www.amazon.fr - site is not usable -
**URL**: https://www.amazon.fr/ap/signin
**Browser / Version**: Firefox Mobile 75.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: Keeps saying incorrect password while it is correct
**Steps to Reproduce**:
Amazon shows incorrect password error, while the password is good. At the same time, I'm receiving an email with confirmation code. This is the same process as on desktop, except that on desktop I am presented with a prompt to input the confirmation code.
Browser Configuration
_From [webcompat.com](https://webcompat.com/) with ❤️_",0, site is not usable url browser version firefox mobile operating system android tested another browser yes problem type site is not usable description keeps saying incorrect password while it is correct steps to reproduce amazon shows incorrect password error while the password is good at the same time i m receiving an email with confirmation code this is the same process as on desktop except that on desktop i am presented with a prompt to input the confirmation code browser configuration none from with ❤️ ,0
785,10348824905.0,IssuesEvent,2019-09-04 20:45:11,Azure/azure-webjobs-sdk,https://api.github.com/repos/Azure/azure-webjobs-sdk,opened,Detect if the v2 ApplicationInsights Service for v2 has been replaced by customer dependency injection.,Supportability,"Customers are mistakenly overwriting the Application Insights dependency in Startup which results in missing data and dependencies not showing up in the Application Map.
This is referred to in the documentation here.
https://docs.microsoft.com/en-us/azure/azure-functions/functions-dotnet-dependency-injection#logging-services
and here
https://docs.microsoft.com/en-us/azure/azure-functions/functions-monitoring#version-2x-3
#### Repro steps
Provide the steps required to reproduce the problem
1. Create a New Functions Project
2. Add a Startup class.
3. Inject your own instance of TelemetryClient
serviceCollection.AddSingleton(sp =>
{
var configuration = TelemetryConfiguration.CreateDefault();
configuration.TelemetryProcessorChainBuilder.Use(next =>
{
var qpProcessor = new QuickPulseTelemetryProcessor(next);
qpProcessor.Initialize(configuration);
return qpProcessor;
});
return configuration; // the factory must return the configured TelemetryConfiguration
});
#### Expected behavior
The runtime can log an error indicating that this may be missing and we can build a detector to share the above links with the customer.
#### Actual behavior
The Application Insights data and dependency tracking is affected and currently this needs a code review to detect and fix.
#### Known workarounds
No workarounds exist.
",True,"Detect if the v2 ApplicationInsights Service for v2 has been replaced by customer dependency injection. - Customers are mistakenly overwriting the Application Insights dependency in Startup which results in missing data and dependencies not showing up in the Application Map.
This is referred to in the documentation here.
https://docs.microsoft.com/en-us/azure/azure-functions/functions-dotnet-dependency-injection#logging-services
and here
https://docs.microsoft.com/en-us/azure/azure-functions/functions-monitoring#version-2x-3
#### Repro steps
Provide the steps required to reproduce the problem
1. Create a New Functions Project
2. Add a Startup class.
3. Inject your own instance of TelemetryClient
serviceCollection.AddSingleton(sp =>
{
var configuration = TelemetryConfiguration.CreateDefault();
configuration.TelemetryProcessorChainBuilder.Use(next =>
{
var qpProcessor = new QuickPulseTelemetryProcessor(next);
qpProcessor.Initialize(configuration);
return qpProcessor;
});
return configuration; // the factory must return the configured TelemetryConfiguration
});
#### Expected behavior
The runtime can log an error indicating that this may be missing and we can build a detector to share the above links with the customer.
#### Actual behavior
The Application Insights data and dependency tracking is affected and currently this needs a code review to detect and fix.
#### Known workarounds
No workarounds exist.
",1,detect if the applicationinsights service for has been replaced by customer dependency injection customers are mistakenly overwriting the application insights dependency in startup which results in missing data and dependencies not showing up in the application map this is referred to in the documentation here and here repro steps provide the steps required to reproduce the problem create a new functions project add a startup class inject your own instance of telemetryclient servicecollection addsingleton sp var configuration telemetryconfiguration createdefault configuration telemetryprocessorchainbuilder use next var qpprocessor new quickpulsetelemetryprocessor next qpprocessor initialize configuration return qpprocessor expected behavior the runtime can log an error indicating that this may be missing and we can build a detector to share the above links with the customer actual behavior the application insights data and dependency tracking is affected and currently this needs a code review to detect and fix known workarounds no workarounds exist ,1
177645,14640675489.0,IssuesEvent,2020-12-25 03:11:14,BGround/Web-Front-End-Interview,https://api.github.com/repos/BGround/Web-Front-End-Interview,opened,ES6 basics: let and const,ES6 documentation,"**let and var declare variables; const declares constants**
A constant can only be read, never reassigned: it is read-only.
Declaring a constant:
```javascript
const es = 'ES6';
```
Differences between let and var
① let does not allow redeclaration; var does
```javascript
var str = 'es6';
var str = 'es2015';
console.log(str);
let es = 'es6';
let es = 'es2015';
console.log(es);
//Uncaught SyntaxError: Identifier 'es' has already been declared
```
② let does not become a property of the top-level object window; var does
```javascript
let str = 'es6';
console.log(window.str); //undefined
```
③ No variable hoisting [5. The JS execution context](#JS的执行上下文)
```javascript
console.log(str);
let str = 'es2015';
//Uncaught ReferenceError: Cannot access 'str' before initialization

// With var, the declaration is hoisted:
//   console.log(str); var str = 'es2015';
// behaves like
//   var str; console.log(str); str = 'es2015';
//The result is undefined; see the execution-context analysis for the details
```
④ Temporal dead zone: inside a block, a variable declared with let is unusable before the let statement
```javascript
if(true) {
console.log(str);
let str = 'es6';//Uncaught ReferenceError: Cannot access 'str' before initialization
}
```
Because of the temporal dead zone, pay special attention when using typeof
[Extra]
For var, creation and initialization are hoisted; assignment is not.
For let, only creation is hoisted; initialization and assignment are not.
For function, creation, initialization and assignment are all hoisted.
⑤ Block scope
```javascript
if(true) {
let str = 'es6';
}
console.log(str);
//Uncaught ReferenceError: str is not defined
```
###### const has all the characteristics of let: no redeclaration, no hoisting, block scope, and a temporal dead zone. The only difference is that const declares a constant, which cannot be reassigned once declared.
###### Note, however, that the examples above all use primitive values. Arrays and objects are reference types, and for reference types the data stored in heap memory can still be modified.
```javascript
const esObj = {
name: 'es6',
year: 2015
}
esObj.name = 'es2015';
console.log(esObj);
const arr = ['es6', 'es7', 'es8'];
arr[0] = 'es2015';
console.log(arr);
```
###### Variables declared with let are accessible only inside their block; they cannot be accessed across blocks or across function scopes. let is a new ES6 command.
###### Variables declared with var can be accessed across blocks, but not across function scopes.
###### const defines constants, which must be assigned at creation and are accessible only inside their block; the reference address a const points to cannot be changed, but the data stored at that address can be modified.
",1.0,"ES6入门之let和const - **let和var声明的是变量,const声明的是常量**
A constant can only be read, never reassigned; in other words, it is read-only.
Declaring a constant:
```javascript
var es = 'ES6';
```
Differences between let and var
① let does not allow redeclaration; var does
```javascript
var str = 'es6';
var str = 'es2015';
console.log(str);
let es = 'es6';
let es = 'es2015';
console.log(es);
//Uncaught SyntaxError: Identifier 'es' has already been declared
```
② let is not attached to the top-level object window; var is
```javascript
let str = 'es6';
console.log(window.str); //undefined
```
③ No variable hoisting [5. JS execution context](#JS的执行上下文)
```javascript
console.log(str);
let str = 'es2015';
//Uncaught ReferenceError: Cannot access 'str' before initialization
console.log(str); var str;
var str = 'es2015'; => console.log(str);
str = 'es6';
//The result is undefined; see the execution-context analysis for details
```
④ Temporal dead zone: inside a code block, a variable declared with let cannot be used before its let declaration
```javascript
if(true) {
console.log(str);
let str = 'es6';//Uncaught ReferenceError: Cannot access 'str' before initialization
}
```
Because of the temporal dead zone, be especially careful when using typeof
[Further notes]
With var, creation and initialization are hoisted; assignment is not.
With let, creation is hoisted; initialization and assignment are not.
With function, creation, initialization and assignment are all hoisted.
⑤ Block scope
```javascript
if(true) {
let str = 'es6';
}
console.log(str);
//Uncaught ReferenceError: str is not defined
```
###### const has all of let's characteristics: no redeclaration, no hoisting, block scope and a temporal dead zone. The only difference is that const declares constants, which cannot be modified once declared.
###### One important note: in the examples above, the values that cannot be modified are all primitive types. Arrays and objects are reference types, and for reference types the data stored in heap memory can still be modified.
```javascript
const esObj = {
name: 'es6',
year: 2015
}
esObj.name = 'es2015';
console.log(esObj);
const arr = ['es6', 'es7', 'es8'];
arr[0] = 'es2015';
console.log(arr);
```
###### Variables defined with let can only be accessed inside their block scope; they cannot be accessed across block scopes or across function scopes. This is a declaration form added in ES6.
###### Variables defined with var can be accessed across block scopes, but not across function scopes.
###### const is used to define constants. A value must be assigned at creation, the constant can only be accessed inside its block scope, and for reference types the address it points to cannot be changed, although the data at that address can be modified.
",0, let and var declare variables const declares constants a constant can only be read never reassigned in other words it is read only declaring a constant javascript var es differences between let and var ① let does not allow redeclaration var does javascript var str var str console log str let es let es console log es uncaught syntaxerror identifier es has already been declared ② let is not attached to the top level object window var is javascript let str console log window str undefined ③ no variable hoisting js execution context javascript console log str let str uncaught referenceerror cannot access str before initialization console log str var str var str console log str str the result is undefined see the execution context analysis for details ④ temporal dead zone inside a code block a variable declared with let cannot be used before its let declaration javascript if true console log str let str uncaught referenceerror cannot access str before initialization because of the temporal dead zone be especially careful when using typeof further notes with var creation and initialization are hoisted assignment is not with let creation is hoisted initialization and assignment are not with function creation initialization and assignment are all hoisted ⑤ block scope javascript if true let str console log str uncaught referenceerror str is not defined const has all of let s characteristics no redeclaration no hoisting block scope and a temporal dead zone the only difference is that const declares constants which cannot be modified once declared one important note in the examples above the values that cannot be modified are all primitive types arrays and objects are reference types and for reference types the data stored in heap memory can still be modified javascript const esobj name year esobj name console log esobj const arr arr console log arr variables defined with let can only be accessed inside their block scope they cannot be accessed across block scopes or across function scopes this is a declaration form added in es6 variables defined with var can be accessed across block scopes but not across function scopes const is used to define constants a value must be assigned at creation it can only be accessed inside its block scope and for reference types the address it points to cannot be changed although the data at that address can be modified ,0
71208,18522995351.0,IssuesEvent,2021-10-20 16:58:42,golang/go,https://api.github.com/repos/golang/go,opened,x/build/dashboard: remove OpenBSD 6.4 builders,OS-OpenBSD Builders NeedsFix,"At this time, OpenBSD 7.0 (released 6 days ago) and 6.9 are the supported releases of OpenBSD per their support policy of maintaining the last 2 releases. OpenBSD 6.4 stopped being supported on October 17, 2019.
Go's OpenBSD support policy matches that of OpenBSD (https://golang.org/wiki/OpenBSD#longterm-support), so it's time to remove the OpenBSD 6.4 builders (for 386/amd64 archs). We'll have coverage from remaining OpenBSD 6.8 builders for same archs, plus ARM/MIPS ones, and any newer OpenBSD builders that are added.
CC @golang/release, @4a6f656c.",1.0,"x/build/dashboard: remove OpenBSD 6.4 builders - At this time, OpenBSD 7.0 (released 6 days ago) and 6.9 are the supported releases of OpenBSD per their support policy of maintaining the last 2 releases. OpenBSD 6.4 stopped being supported on October 17, 2019.
Go's OpenBSD support policy matches that of OpenBSD (https://golang.org/wiki/OpenBSD#longterm-support), so it's time to remove the OpenBSD 6.4 builders (for 386/amd64 archs). We'll have coverage from remaining OpenBSD 6.8 builders for same archs, plus ARM/MIPS ones, and any newer OpenBSD builders that are added.
CC @golang/release, @4a6f656c.",0,x build dashboard remove openbsd builders at this time openbsd released days ago and are the supported releases of openbsd per their support policy of maintaining the last releases openbsd stopped being supported on october go s openbsd support policy matches that of openbsd so it s time to remove the openbsd builders for archs we ll have coverage from remaining openbsd builders for same archs plus arm mips ones and any newer openbsd builders that are added cc golang release ,0
1767,2631744771.0,IssuesEvent,2015-03-07 11:32:56,Frege/frege,https://api.github.com/repos/Frege/frege,opened,lang-ref: chapter 5,documentation,"also not in the repo?
The long footnote: the argumentation around ""the current directory"" is a bit weak since in Java, one cannot change the current directory from within Java (only by using JNI to call into native code - but then one could also change env parameters that way.)",1.0,"lang-ref: chapter 5 - also not in the repo?
The long footnote: the argumentation around ""the current directory"" is a bit weak since in Java, one cannot change the current directory from within Java (only by using JNI to call into native code - but then one could also change env parameters that way.)",0,lang ref chapter also not in the repo the long footnote the argumentation around the current directory is a bit weak since in java one cannot change the current directory from within java only by using jni to call into native code but then one could also change env parameters that way ,0
380,6392819662.0,IssuesEvent,2017-08-04 04:38:40,UnixJunkie/parany,https://api.github.com/repos/UnixJunkie/parany,closed,Named semaphore size limit on OSX,portability,"turns out to be a bit short: [PSEMNAMLEN](https://github.com/st3fan/osx-10.9/blob/master/xnu-2422.1.72/bsd/sys/posix_sem.h#L54).
Can we `s/ocaml_//` [here](https://github.com/UnixJunkie/parany/blob/master/parany.ml#L25) ?
Or maybe (or also) change `pid` to `p` and `sem` to `s` ?",True,"Named semaphore size limit on OSX - turns out to be a bit short: [PSEMNAMLEN](https://github.com/st3fan/osx-10.9/blob/master/xnu-2422.1.72/bsd/sys/posix_sem.h#L54).
Can we `s/ocaml_//` [here](https://github.com/UnixJunkie/parany/blob/master/parany.ml#L25) ?
Or maybe (or also) change `pid` to `p` and `sem` to `s` ?",1,named semaphore size limit on osx turns out to be a bit short can we s ocaml or maybe or also change pid to p and sem to s ,1
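For reference, the limit mentioned above is easy to demonstrate directly against the POSIX API: on OS X, `sem_open()` rejects names longer than PSEMNAMLEN (31 in the header linked above) with ENAMETOOLONG. A minimal C sketch, using an illustrative over-long name rather than parany's actual naming scheme:
```c
#include <errno.h>
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Illustrative name only (32 characters including the leading slash), not parany's real format. */
    const char *long_name = "/ocaml_parany_pid_12345_sem_in_0";
    sem_t *s = sem_open(long_name, O_CREAT, 0600, 0);
    if (s == SEM_FAILED)
        printf("sem_open failed: %s\n", strerror(errno)); /* ENAMETOOLONG expected on OS X */
    else
        sem_close(s);
    return 0;
}
```
The same call typically succeeds on Linux, where the name limit is much higher, which is why shortening the `ocaml_`, `pid` and `sem` parts only matters for the OS X case.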
549,7733692745.0,IssuesEvent,2018-05-26 14:57:04,systemd/systemd,https://api.github.com/repos/systemd/systemd,closed,There seems to be a memory leak in systemd-portabled,bug 🐛 has-pr ✨ portable,"I started to play with `portablectl` and came across a memory leak reported by `ASAN`. I think `attach` and `stop` should be enough to reproduce it:
```sh
-bash-4.4# /usr/lib/systemd/portablectl attach GIBBERISH
(Matching unit files with prefix 'GIBBERISH'.)
Failed to attach image: No image 'GIBBERISH' found.
-bash-4.4# systemctl stop systemd-portabled
```
Here is what I found in the journal:
```sh
systemd-portabled.service: Trying to enqueue job systemd-portabled.service/stop/replace
systemd-portabled.service: Installed new job systemd-portabled.service/stop as 335
systemd-portabled.service: Enqueued job systemd-portabled.service/stop as 335
systemd-portabled.service: Changed running -> stop-sigterm
Stopping Portable Service Manager...
systemd-portabled.service: D-Bus name org.freedesktop.portable1 no longer registered by :1.12
=================================================================
==152==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 10 byte(s) in 1 object(s) allocated from:
#0 0x7f467b823238 in __interceptor_strdup (/lib64/libasan.so.4+0x77238)
#1 0x7f467abc222d in strv_extend ../src/basic/strv.c:501
#2 0x7f467ac70403 in bus_message_read_strv_extend ../src/libsystemd/sd-bus/bus-message.c:5503
#3 0x7f467ac706ff in sd_bus_message_read_strv ../src/libsystemd/sd-bus/bus-message.c:5525
#4 0x5594d5aa88f9 in bus_image_common_attach ../src/portable/portabled-image-bus.c:235
#5 0x5594d5aa58af in redirect_method_to_image ../src/portable/portabled-bus.c:215
#6 0x5594d5aa5bf4 in method_attach_image ../src/portable/portabled-bus.c:250
#7 0x7f467ac783a9 in method_callbacks_run ../src/libsystemd/sd-bus/bus-objects.c:407
#8 0x7f467ac7fa48 in object_find_and_run ../src/libsystemd/sd-bus/bus-objects.c:1265
#9 0x7f467ac80ca9 in bus_process_object ../src/libsystemd/sd-bus/bus-objects.c:1381
#10 0x7f467acc3e82 in process_message ../src/libsystemd/sd-bus/sd-bus.c:2666
#11 0x7f467acc42f4 in process_running ../src/libsystemd/sd-bus/sd-bus.c:2708
#12 0x7f467acc64a3 in bus_process_internal ../src/libsystemd/sd-bus/sd-bus.c:2927
#13 0x7f467acc666b in sd_bus_process ../src/libsystemd/sd-bus/sd-bus.c:2954
#14 0x7f467acc922c in io_callback ../src/libsystemd/sd-bus/sd-bus.c:3299
#15 0x7f467ad61e85 in source_dispatch ../src/libsystemd/sd-event/sd-event.c:2294
#16 0x7f467ad65bd2 in sd_event_dispatch ../src/libsystemd/sd-event/sd-event.c:2652
#17 0x7f467ad66829 in sd_event_run ../src/libsystemd/sd-event/sd-event.c:2709
#18 0x7f467a9d15db in bus_event_loop_with_idle ../src/shared/bus-util.c:124
#19 0x5594d5aaf406 in manager_run ../src/portable/portabled.c:117
#20 0x5594d5aaf9d1 in main ../src/portable/portabled.c:161
#21 0x7f46791eaf29 in __libc_start_main (/lib64/libc.so.6+0x20f29)
SUMMARY: AddressSanitizer: 10 byte(s) leaked in 1 allocation(s).
systemd-portabled.service: Child 152 belongs to systemd-portabled.service.
systemd-portabled.service: Main process exited, code=exited, status=1/FAILURE
systemd-portabled.service: Failed with result 'exit-code'.
systemd-portabled.service: Changed stop-sigterm -> failed
systemd-portabled.service: Job systemd-portabled.service/stop finished, result=done
Stopped Portable Service Manager.
systemd-portabled.service: Unit entered failed state.
```",True,"There seems to be a memory leak in systemd-portabled - I started to play with `portablectl` and came across a memory leak reported by `ASAN`. I think `attach` and `stop` should be enough to reproduce it:
```sh
-bash-4.4# /usr/lib/systemd/portablectl attach GIBBERISH
(Matching unit files with prefix 'GIBBERISH'.)
Failed to attach image: No image 'GIBBERISH' found.
-bash-4.4# systemctl stop systemd-portabled
```
Here is what I found in the journal:
```sh
systemd-portabled.service: Trying to enqueue job systemd-portabled.service/stop/replace
systemd-portabled.service: Installed new job systemd-portabled.service/stop as 335
systemd-portabled.service: Enqueued job systemd-portabled.service/stop as 335
systemd-portabled.service: Changed running -> stop-sigterm
Stopping Portable Service Manager...
systemd-portabled.service: D-Bus name org.freedesktop.portable1 no longer registered by :1.12
=================================================================
==152==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 10 byte(s) in 1 object(s) allocated from:
#0 0x7f467b823238 in __interceptor_strdup (/lib64/libasan.so.4+0x77238)
#1 0x7f467abc222d in strv_extend ../src/basic/strv.c:501
#2 0x7f467ac70403 in bus_message_read_strv_extend ../src/libsystemd/sd-bus/bus-message.c:5503
#3 0x7f467ac706ff in sd_bus_message_read_strv ../src/libsystemd/sd-bus/bus-message.c:5525
#4 0x5594d5aa88f9 in bus_image_common_attach ../src/portable/portabled-image-bus.c:235
#5 0x5594d5aa58af in redirect_method_to_image ../src/portable/portabled-bus.c:215
#6 0x5594d5aa5bf4 in method_attach_image ../src/portable/portabled-bus.c:250
#7 0x7f467ac783a9 in method_callbacks_run ../src/libsystemd/sd-bus/bus-objects.c:407
#8 0x7f467ac7fa48 in object_find_and_run ../src/libsystemd/sd-bus/bus-objects.c:1265
#9 0x7f467ac80ca9 in bus_process_object ../src/libsystemd/sd-bus/bus-objects.c:1381
#10 0x7f467acc3e82 in process_message ../src/libsystemd/sd-bus/sd-bus.c:2666
#11 0x7f467acc42f4 in process_running ../src/libsystemd/sd-bus/sd-bus.c:2708
#12 0x7f467acc64a3 in bus_process_internal ../src/libsystemd/sd-bus/sd-bus.c:2927
#13 0x7f467acc666b in sd_bus_process ../src/libsystemd/sd-bus/sd-bus.c:2954
#14 0x7f467acc922c in io_callback ../src/libsystemd/sd-bus/sd-bus.c:3299
#15 0x7f467ad61e85 in source_dispatch ../src/libsystemd/sd-event/sd-event.c:2294
#16 0x7f467ad65bd2 in sd_event_dispatch ../src/libsystemd/sd-event/sd-event.c:2652
#17 0x7f467ad66829 in sd_event_run ../src/libsystemd/sd-event/sd-event.c:2709
#18 0x7f467a9d15db in bus_event_loop_with_idle ../src/shared/bus-util.c:124
#19 0x5594d5aaf406 in manager_run ../src/portable/portabled.c:117
#20 0x5594d5aaf9d1 in main ../src/portable/portabled.c:161
#21 0x7f46791eaf29 in __libc_start_main (/lib64/libc.so.6+0x20f29)
SUMMARY: AddressSanitizer: 10 byte(s) leaked in 1 allocation(s).
systemd-portabled.service: Child 152 belongs to systemd-portabled.service.
systemd-portabled.service: Main process exited, code=exited, status=1/FAILURE
systemd-portabled.service: Failed with result 'exit-code'.
systemd-portabled.service: Changed stop-sigterm -> failed
systemd-portabled.service: Job systemd-portabled.service/stop finished, result=done
Stopped Portable Service Manager.
systemd-portabled.service: Unit entered failed state.
```",1,there seems to be a memory leak in systemd portabled i started to play with portablectl and came across a memory leak reported by asan i think attach and stop should be enough to reproduce it sh bash usr lib systemd portablectl attach gibberish matching unit files with prefix gibberish failed to attach image no image gibberish found bash systemctl stop systemd portabled here is what i found in the journal sh systemd portabled service trying to enqueue job systemd portabled service stop replace systemd portabled service installed new job systemd portabled service stop as systemd portabled service enqueued job systemd portabled service stop as systemd portabled service changed running stop sigterm stopping portable service manager systemd portabled service d bus name org freedesktop no longer registered by error leaksanitizer detected memory leaks direct leak of byte s in object s allocated from in interceptor strdup libasan so in strv extend src basic strv c in bus message read strv extend src libsystemd sd bus bus message c in sd bus message read strv src libsystemd sd bus bus message c in bus image common attach src portable portabled image bus c in redirect method to image src portable portabled bus c in method attach image src portable portabled bus c in method callbacks run src libsystemd sd bus bus objects c in object find and run src libsystemd sd bus bus objects c in bus process object src libsystemd sd bus bus objects c in process message src libsystemd sd bus sd bus c in process running src libsystemd sd bus sd bus c in bus process internal src libsystemd sd bus sd bus c in sd bus process src libsystemd sd bus sd bus c in io callback src libsystemd sd bus sd bus c in source dispatch src libsystemd sd event sd event c in sd event dispatch src libsystemd sd event sd event c in sd event run src libsystemd sd event sd event c in bus event loop with idle src shared bus util c in manager run src portable portabled c in main src portable portabled c in libc start main libc so summary addresssanitizer byte s leaked in allocation s systemd portabled service child belongs to systemd portabled service systemd portabled service main process exited code exited status failure systemd portabled service failed with result exit code systemd portabled service changed stop sigterm failed systemd portabled service job systemd portabled service stop finished result done stopped portable service manager systemd portabled service unit entered failed state ,1
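The leak reported above is the string duplicated by `strv_extend()` never being freed after the failed attach. As a generic illustration of the pattern (this is not systemd's actual fix, and `strv_free_all` below is a hypothetical stand-in for systemd's own strv helpers), every `strdup`'d element of such a NULL-terminated vector has to be released on every exit path:
```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for a NULL-terminated string vector ("strv"). */
static void strv_free_all(char **l) {
    if (!l)
        return;
    for (char **s = l; *s; s++)
        free(*s);
    free(l);
}

static char **read_one_string(void) {
    char **v = calloc(2, sizeof(char *));
    if (!v)
        return NULL;
    v[0] = strdup("example");   /* the kind of allocation ASAN flags above */
    if (!v[0]) {
        free(v);
        return NULL;
    }
    return v;                   /* caller owns v and must free it on every path */
}

int main(void) {
    char **v = read_one_string();
    /* Freeing on error paths as well as success paths is what prevents the reported leak. */
    strv_free_all(v);
    return 0;
}
```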
917,11983057598.0,IssuesEvent,2020-04-07 13:54:25,ToFuProject/tofu,https://api.github.com/repos/ToFuProject/tofu,closed,Change default separator in to_dict() from '_' to '.',Fixed in devel portability,"The default separator for flattening nested dict is currently `'_'`.
* It keeps users from defining attributes with `'_'` in the name
* A `'.'` would better render the nested structure
=> Switch from `'_'` to `'.'`
=> Ensure backward-retrocompatibility in `tf.load()` by implementing a general test",True,"Change default separator in to_dict() from '_' to '.' - The default separator for flattening nested dict is currently `'_'`.
* It keeps users from defining attributes with `'_'` in the name
* A `'.'` would better render the nested structure
=> Switch from `'_'` to `'.'`
=> Ensure backward-retrocompatibility in `tf.load()` by implementing a general test",1,change default separator in to dict from to the default separator for flattening nested dict is currently it keeps users from defining attributes with in the name a would better render te nested structure switch from to ensure backward retrocompatibility in tf load by implementing a general test,1
133,3487812984.0,IssuesEvent,2016-01-02 09:37:27,edenhill/librdkafka,https://api.github.com/repos/edenhill/librdkafka,closed,Getting errno from DLL,enhancement portability,"I'm building librdkafka as a DLL that contains static version of the run-time library (/MT).
The problem is that the calling application can no longer get errno, since it's a different run-time.
Is it possible to add a new function to expose errno?
e.g.
`int rd_errno() { return errno; }`
Thanks.
",True,"Getting errno from DLL - I'm building librdkafka as a DLL that contains static version of the run-time library (/MT).
The problem is that the calling application can no longer get errno, since it's a different run-time.
Is it possible to add a new function to expose errno?
e.g.
`int rd_errno() { return errno; }`
Thanks.
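A minimal sketch of the function suggested above, assuming a hypothetical RD_EXPORT macro for the DLL export annotation (librdkafka's real build may name this differently). Because the function is compiled into the DLL, it reads the errno of the DLL's own statically linked run-time, which is exactly what the calling application cannot see:
```c
#include <errno.h>

#ifdef _WIN32
#define RD_EXPORT __declspec(dllexport)   /* hypothetical export macro for this sketch */
#else
#define RD_EXPORT
#endif

/* Returns the errno of the C run-time linked into this DLL (/MT build). */
RD_EXPORT int rd_errno(void) {
    return errno;
}
```
The calling application would then use `rd_errno()` after a failed call instead of inspecting its own, unrelated errno.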
",1,getting errno from dll i m building librdkafka as a dll that contains static version of the run time library mt the problem is that calling application can no longer get errno since it s different run time is it possible to add new function to expose errno e g int rd errno return errno thanks ,1
141553,5437994424.0,IssuesEvent,2017-03-06 09:09:44,robotology/gazebo-yarp-plugins,https://api.github.com/repos/robotology/gazebo-yarp-plugins,closed,Test compilation against Gazebo 8 ,Complexity: Medium Priority: IFI Day Priority: Normal,"Gazebo 8 has been recently released ( http://gazebosim.org/blog/gazebo8 ) but it does not support Ubuntu 14.04 Trusty. As Travis only supports Trusty at the moment, we have no way of checking if `gazebo-yarp-plugins` compile against Gazebo 8.
By randomly searching online it seems that it should be possible to use Xenial on Travis using Docker, but I don't fully understand how this is supposed to work and what the limitations are. Perhaps @diegoferigo knows something more on this?",2.0,"Test compilation against Gazebo 8 - Gazebo 8 has been recently released ( http://gazebosim.org/blog/gazebo8 ) but it does not support Ubuntu 14.04 Trusty. As Travis only supports Trusty at the moment, we have no way of checking if `gazebo-yarp-plugins` compile against Gazebo 8.
By randomly searching online it seems that it should be possible to use Xenial on Travis using Docker, but I don't fully understand how this is supposed to work and what the limitations are. Perhaps @diegoferigo knows something more on this?",0,test compilation against gazebo gazebo has been recently released but it does not support ubuntu trusty as travis only supports trusty at the moment we have no way of checking if gazebo yarp plugins compile against gazebo by randomly searching online it seems that it should be possible to use xenial on travis using docker but i don t fully understand how this is supposed to work and what the limitations are perhaps diegoferigo knows something more on this ,0
134,3488370526.0,IssuesEvent,2016-01-02 22:11:20,svaarala/duktape,https://api.github.com/repos/svaarala/duktape,closed,Add support for using C++ exceptions instead of setjmp/longjmp to support RAII,enhancement portability,"Hello,
We have a problem with RAII and automatic destruction via stack unwinding because duktape uses setjmp/longjmp.
Lua offers the option to use real C++ exceptions instead of setjmp/longjmp when compiled as C++. Can we add this for duktape too?
The following code will never call the Object destructor from the test function.
#include <cstdio>
#include ""duktape.h""
class Object
{
public:
    Object() = default;
    ~Object()
    {
        puts(""Destroying object"");
    }
};
duk_ret_t test(duk_context *ctx)
{
    Object o;
    duk_push_string(ctx, ""hello"");
    duk_throw(ctx);
    return 0;
}
int main(void)
{
    auto ctx = duk_create_heap_default();
    duk_push_global_object(ctx);
    duk_push_c_function(ctx, test, 0);
    duk_put_prop_string(ctx, -2, ""test"");
    duk_peval_string(ctx, ""try { test(); } catch (ex) { print(ex); }"");
    return 0;
}",True,"Add support for using C++ exceptions instead of setjmp/longjmp to support RAII - Hello,
We have a problem with RAII and automatic destruction via stack unwinding because duktape uses setjmp/longjmp.
Lua offers the option to use real C++ exceptions instead of setjmp/longjmp when compiled as C++. Can we add this for duktape too?
The following code will never call the Object destructor from the test function.
#include <cstdio>
#include ""duktape.h""
class Object
{
public:
    Object() = default;
    ~Object()
    {
        puts(""Destroying object"");
    }
};
duk_ret_t test(duk_context *ctx)
{
    Object o;
    duk_push_string(ctx, ""hello"");
    duk_throw(ctx);
    return 0;
}
int main(void)
{
    auto ctx = duk_create_heap_default();
    duk_push_global_object(ctx);
    duk_push_c_function(ctx, test, 0);
    duk_put_prop_string(ctx, -2, ""test"");
    duk_peval_string(ctx, ""try { test(); } catch (ex) { print(ex); }"");
    return 0;
}",1,add support for using c exceptions instead of setjmp longjmp to support raii hello we have a problem with raii and automatic destruction via stack unwinding because duktape use setjmp longjmp lua proposes to use real c exception instead of setjmp longjmp if compiled in c can we add this for duktape too the following code will never call the object destructor from test function include include duktape h class object public object default object puts destroying object duk ret t test duk context ctx object o duk push string ctx hello duk throw ctx return int main void auto ctx duk create heap default duk push global object ctx duk push c function ctx test duk put prop string ctx test duk peval string ctx try test catch ex print ex return ,1
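As background on why the destructor above never runs, here is a plain C illustration (it does not use duktape) of setjmp/longjmp skipping cleanup in intermediate frames; the malloc/free pair stands in for a C++ object whose destructor would otherwise fire during unwinding:
```c
#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>

static jmp_buf env;

static void raise_error(void) {
    longjmp(env, 1);             /* jumps straight back to the setjmp() in main */
}

static void work(void) {
    char *buf = malloc(64);      /* stands in for a C++ object with a destructor */
    raise_error();
    free(buf);                   /* never reached: longjmp bypassed this cleanup */
}

int main(void) {
    if (setjmp(env) == 0)
        work();
    else
        puts("longjmp taken; the cleanup in work() was skipped");
    return 0;
}
```
Real C++ exceptions would unwind through work() and run destructors, which is what the request above asks duktape to optionally support.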
288052,31856946666.0,IssuesEvent,2023-09-15 08:10:30,nidhi7598/linux-4.19.72_CVE-2022-3564,https://api.github.com/repos/nidhi7598/linux-4.19.72_CVE-2022-3564,closed,CVE-2020-29370 (High) detected in linuxlinux-4.19.294 - autoclosed,Mend: dependency security vulnerability,"## CVE-2020-29370 - High Severity Vulnerability
Vulnerable Library - linuxlinux-4.19.294
The Linux Kernel
Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux
Found in HEAD commit: 454c7dacf6fa9a6de86d4067f5a08f25cffa519b
Found in base branch: main
Vulnerable Source Files (2)
/mm/slub.c
/mm/slub.c
Vulnerability Details
An issue was discovered in kmem_cache_alloc_bulk in mm/slub.c in the Linux kernel before 5.5.11. The slowpath lacks the required TID increment, aka CID-fd4d9c7d0c71.
Publish Date: 2020-11-28
URL: CVE-2020-29370
CVSS 3 Score Details (7.0)
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29370
Release Date: 2020-11-28
Fix Resolution: v5.6-rc7,v5.5.11,v5.4.28
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-29370 (High) detected in linuxlinux-4.19.294 - autoclosed - ## CVE-2020-29370 - High Severity Vulnerability
Vulnerable Library - linuxlinux-4.19.294
The Linux Kernel
Library home page: https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux
Found in HEAD commit: 454c7dacf6fa9a6de86d4067f5a08f25cffa519b
Found in base branch: main
Vulnerable Source Files (2)
/mm/slub.c
/mm/slub.c
Vulnerability Details
An issue was discovered in kmem_cache_alloc_bulk in mm/slub.c in the Linux kernel before 5.5.11. The slowpath lacks the required TID increment, aka CID-fd4d9c7d0c71.
Publish Date: 2020-11-28
URL: CVE-2020-29370
CVSS 3 Score Details (7.0)
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-29370
Release Date: 2020-11-28
Fix Resolution: v5.6-rc7,v5.5.11,v5.4.28
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files mm slub c mm slub c vulnerability details an issue was discovered in kmem cache alloc bulk in mm slub c in the linux kernel before the slowpath lacks the required tid increment aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0
792381,27958201069.0,IssuesEvent,2023-03-24 13:54:25,GoogleChrome/lighthouse,https://api.github.com/repos/GoogleChrome/lighthouse,closed,Firefox extension fails to generate report with 403 error,bug needs-more-info needs-priority,"### FAQ
- [X] Yes, my issue is not about [variability](https://github.com/GoogleChrome/lighthouse/blob/main/docs/variability.md) or [throttling](https://github.com/GoogleChrome/lighthouse/blob/main/docs/throttling.md).
- [X] Yes, my issue is not about a specific accessibility audit (file with [axe-core](https://github.com/dequelabs/axe-core) instead).
### URL
https://google.com/
### What happened?
Firefox: 111.0
Lighthouse-extension: 100.0.0.2
OS: Debian Bullseye (11.6)
All other extensions disabled.
Attempt to generate a report on any page fails with the following message in dev tools console.
```
code: 403, message: ""Requests from referer https://www.googleapis.com/ are blocked."", errors: (1) […], status: ""PERMISSION_DENIED"", details: (1) […]
[main-ac02a15b.js:1607:6288](https://googlechrome.github.io/lighthouse/viewer/src/main-ac02a15b.js)
Uncaught Error: Requests from referer https://www.googleapis.com/ are blocked.
error https://googlechrome.github.io/lighthouse/viewer/src/main-ac02a15b.js:17
```
### What did you expect?
Report generated.
### What have you tried?
* Disabled all other extensions.
* Disabled all security features.
### How were you running Lighthouse?
Other
### Lighthouse Version
100.0.0.2
### Chrome Version
_No response_
### Node Version
_No response_
### OS
Debian Bullseye (11.6)
### Relevant log output
```shell
Firefox: 111.0
Lighthouse-extension: 100.0.0.2
OS: Debian Bullseye (11.6)
All other extensions disabled.
Attempt to generate a report on any page fails with the following message in dev tools console.
code: 403, message: ""Requests from referer https://www.googleapis.com/ are blocked."", errors: (1) […], status: ""PERMISSION_DENIED"", details: (1) […]
[main-ac02a15b.js:1607:6288](https://googlechrome.github.io/lighthouse/viewer/src/main-ac02a15b.js)
Uncaught Error: Requests from referer https://www.googleapis.com/ are blocked.
error https://googlechrome.github.io/lighthouse/viewer/src/main-ac02a15b.js:17
```
",1.0,"Firefox extension fails to generate report with 403 error - ### FAQ
- [X] Yes, my issue is not about [variability](https://github.com/GoogleChrome/lighthouse/blob/main/docs/variability.md) or [throttling](https://github.com/GoogleChrome/lighthouse/blob/main/docs/throttling.md).
- [X] Yes, my issue is not about a specific accessibility audit (file with [axe-core](https://github.com/dequelabs/axe-core) instead).
### URL
https://google.com/
### What happened?
Firefox: 111.0
Lighthouse-extension: 100.0.0.2
OS: Debian Bullseye (11.6)
All other extensions disabled.
Attempt to generate a report on any page fails with the following message in dev tools console.
```
code: 403, message: ""Requests from referer https://www.googleapis.com/ are blocked."", errors: (1) […], status: ""PERMISSION_DENIED"", details: (1) […]
[main-ac02a15b.js:1607:6288](https://googlechrome.github.io/lighthouse/viewer/src/main-ac02a15b.js)
Uncaught Error: Requests from referer https://www.googleapis.com/ are blocked.
error https://googlechrome.github.io/lighthouse/viewer/src/main-ac02a15b.js:17
```
### What did you expect?
Report generated.
### What have you tried?
* Disabled all other extensions.
* Disabled all security features.
### How were you running Lighthouse?
Other
### Lighthouse Version
100.0.0.2
### Chrome Version
_No response_
### Node Version
_No response_
### OS
Debian Bullseye (11.6)
### Relevant log output
```shell
Firefox: 111.0
Lighthouse-extension: 100.0.0.2
OS: Debian Bullseye (11.6)
All other extensions disabled.
Attempt to generate a report on any page fails with the following message in dev tools console.
code: 403, message: ""Requests from referer https://www.googleapis.com/ are blocked."", errors: (1) […], status: ""PERMISSION_DENIED"", details: (1) […]
[main-ac02a15b.js:1607:6288](https://googlechrome.github.io/lighthouse/viewer/src/main-ac02a15b.js)
Uncaught Error: Requests from referer https://www.googleapis.com/ are blocked.
error https://googlechrome.github.io/lighthouse/viewer/src/main-ac02a15b.js:17
```
",0,firefox extension fails to generate report with error faq yes my issue is not about or yes my issue is not about a specific accessibility audit file with instead url what happened firefox lighthouse extension os debian bullseye all other extensions disabled attempt to generate a report on any page fails with the following message in dev tools console code message requests from referer are blocked errors status permission denied details uncaught error requests from referer are blocked error what did you expect report generated what have you tried disabled all other extensions disabled all security features how were you running lighthouse other lighthouse version chrome version no response node version no response os debian bullseye relevant log output shell firefox lighthouse extension os debian bullseye all other extensions disabled attempt to generate a report on any page fails with the following message in dev tools console code message requests from referer are blocked errors status permission denied details uncaught error requests from referer are blocked error ,0
266252,8364769652.0,IssuesEvent,2018-10-04 00:53:13,GreenInfo-Network/nyc-crash-mapper-chart-view,https://api.github.com/repos/GreenInfo-Network/nyc-crash-mapper-chart-view,closed,Rank: add Borough field to other geogs + display here,Priority,"The Intersections are special here, in that they have both a title and a subtitle: Borough & Intersection name.
They like this so much, that they want to apply this to the other geogs.
First order of business:
- [x] update datasets so there exists a *borough* field in these other tables
- [x] Community Board
- [x] NYPD Precinct
- [x] NTA Neighborhood
**It would be important** that each area fits into exactly 1 borough. If they cross a boundary so a place can have >1 borough, that could make for some really goofy display readouts and new UI considerations.
Then, the UI update to show the Borough + place name like Intersections do.
- [x] Community Board
- [x] NYPD Precinct
- [x] NTA Neighborhood
- [x] but not Borough, of course
",1.0,"Rank: add Borough field to other geogs + display here - The Intersections are special here, in that they have both a title and a subtitle: Borough & Intersection name.
They like this so much, that they want to apply this to the other geogs.
First order of business:
- [x] update datasets so there exists a *borough* field in these other tables
- [x] Community Board
- [x] NYPD Precinct
- [x] NTA Neighborhood
**It would be important** that each area fits into exactly 1 borough. If they cross a boundary so a place can have >1 borough, that could make for some really goofy display readouts and new UI considerations.
Then, the UI update to show the Borough + place name like Intersections do.
- [x] Community Board
- [x] NYPD Precinct
- [x] NTA Neighborhood
- [x] but not Borough, of course
",0,rank add borough field to other geogs display here the intersections are special here in that they have both a title and a subtitle borough intersection name they like this so much that they want to apply this to the other geogs first order of business update datasets so there exists a borough field in these other tables community board nypd precinct nta neighborhood it would be important that each area fits into exactly borough if they cross a boundary so a place can have borough that could make for some really goofy display readouts and new ui considerations then the ui update to show the borough place name like intersections do community board nypd precinct nta neighborhood but not borough of course ,0
5497,2941867710.0,IssuesEvent,2015-07-02 10:47:20,bem/bem-forum-content-ru,https://api.github.com/repos/bem/bem-forum-content-ru,closed,Ошибка в документации,asktheteam documentation js,"Вот здесь https://ru.bem.info/technology/i-bem/v2/i-bem-js/#%D0%9E%D0%BF%D0%B8%D1%81%D0%B0%D0%BD%D0%B8%D0%B5-%D0%B1%D0%BB%D0%BE%D0%BA%D0%B0-%D0%B2-%D0%B4%D0%B5%D0%BA%D0%BB%D0%B0%D1%80%D0%B0%D1%86%D0%B8%D0%B8
Вместо `modules.define('button', ['i-bem'], function(provide, Button) {`
Должно быть `modules.define('button', function(provide, Button) {`
Не нашел репозиторий bem.info, чтобы поправить самому. Почему нету на него ссылки нигде на сайте? Или какой-нибудь штуки типа ""нашли ошибку?"".",1.0,"Ошибка в документации - Вот здесь https://ru.bem.info/technology/i-bem/v2/i-bem-js/#%D0%9E%D0%BF%D0%B8%D1%81%D0%B0%D0%BD%D0%B8%D0%B5-%D0%B1%D0%BB%D0%BE%D0%BA%D0%B0-%D0%B2-%D0%B4%D0%B5%D0%BA%D0%BB%D0%B0%D1%80%D0%B0%D1%86%D0%B8%D0%B8
Вместо `modules.define('button', ['i-bem'], function(provide, Button) {`
Должно быть `modules.define('button', function(provide, Button) {`
Не нашел репозиторий bem.info, чтобы поправить самому. Почему нету на него ссылки нигде на сайте? Или какой-нибудь штуки типа ""нашли ошибку?"".",0,ошибка в документации вот здесь вместо modules define button function provide button должно быть modules define button function provide button не нашел репозиторий bem info чтобы поправить самому почему нету на него ссылки нигде на сайте или какой нибудь штуки типа нашли ошибку ,0
604,8127377845.0,IssuesEvent,2018-08-17 07:51:06,edenhill/librdkafka,https://api.github.com/repos/edenhill/librdkafka,closed,0.11.5 won't compile with libressl 2.7.4 (latest),portability,"Description
===========
`librdkafka` 0.11.5 doesn't compile using the latest `libressl` (2.7.4).
The commit adding support for the `ssl.sigalgs.list` config option (https://github.com/edenhill/librdkafka/commit/3cc0ab6070d7cda1e5f05d7160ff855945561e4f) checks the version of openssl (`#if OPENSSL_VERSION_NUMBER >= 0x1000200fL`) to determine if the `SSL_CTX_set1_sigalgs_list()` function is available.
Libressl (for some reason) defines the `OPENSSL_VERSION_NUMBER` as `0x20000000L`, but it doesn't (yet) support these newer APIs.
AlpineLinux builds packages using Libressl, which means the latest `librdkafka` doesn't compile on AlpineLinux. (See my attempt here: https://github.com/alpinelinux/aports/pull/4841 )
How to reproduce
================
See linked alpinelinux PR above. This Dockerfile will also reproduce (it succeeds if you change `0.11.5` to `0.11.4`):
```Dockerfile
FROM alpine:3.8
RUN apk add --no-cache build-base bash python2 libressl-dev zlib-dev
RUN mkdir -p /test
WORKDIR /test
ADD https://github.com/edenhill/librdkafka/archive/v0.11.5.tar.gz /test/
RUN tar xzf v0.11.5.tar.gz
WORKDIR /test/librdkafka-0.11.5
RUN ./configure --prefix=/usr
RUN make
```
Checklist
=========
Please provide the following information:
- [x] librdkafka version (release number or git tag): `0.11.5`
- [x] Apache Kafka version: `N/A`
- [x] librdkafka client configuration: `N/A`
- [x] Operating system: `Alpine Linux 3.8 & edge`
- [ ] Provide logs (with `debug=..` as necessary) from librdkafka
- [ ] Provide broker log excerpts
- [ ] Critical issue",True,"0.11.5 won't compile with libressl 2.7.4 (latest) - Description
===========
`librdkafka` 0.11.5 doesn't compile using the latest `libressl` (2.7.4).
The commit adding support for the `ssl.sigalgs.list` config option (https://github.com/edenhill/librdkafka/commit/3cc0ab6070d7cda1e5f05d7160ff855945561e4f) checks the version of openssl (`#if OPENSSL_VERSION_NUMBER >= 0x1000200fL`) to determine if the `SSL_CTX_set1_sigalgs_list()` function is available.
Libressl (for some reason) defines the `OPENSSL_VERSION_NUMBER` as `0x20000000L`, but it doesn't (yet) support these newer APIs.
AlpineLinux builds packages using Libressl, which means the latest `librdkafka` doesn't compile on AlpineLinux. (See my attempt here: https://github.com/alpinelinux/aports/pull/4841 )
How to reproduce
================
See linked alpinelinux PR above. This Dockerfile will also reproduce (it succeeds if you change `0.11.5` to `0.11.4`):
```Dockerfile
FROM alpine:3.8
RUN apk add --no-cache build-base bash python2 libressl-dev zlib-dev
RUN mkdir -p /test
WORKDIR /test
ADD https://github.com/edenhill/librdkafka/archive/v0.11.5.tar.gz /test/
RUN tar xzf v0.11.5.tar.gz
WORKDIR /test/librdkafka-0.11.5
RUN ./configure --prefix=/usr
RUN make
```
Checklist
=========
Please provide the following information:
- [x] librdkafka version (release number or git tag): `0.11.5`
- [x] Apache Kafka version: `N/A`
- [x] librdkafka client configuration: `N/A`
- [x] Operating system: `Alpine Linux 3.8 & edge`
- [ ] Provide logs (with `debug=..` as necessary) from librdkafka
- [ ] Provide broker log excerpts
- [ ] Critical issue",1, won t compile with libressl latest description librdkafka doesn t compile using the latest libressl the commit adding support for the ssl sigalgs list config option checks the version of openssl if openssl version number to determine if the ssl ctx sigalgs list function is available libressl for some reason defines the openssl version number as but it doesn t yet support these newer apis alpinelinux builds packages using libressl which means the latest librdkafka doesn t compile on alpinelinux see my attempt here how to reproduce see linked alpinelinux pr above this dockerfile will also reproduce it succeeds if you change to dockerfile from alpine run apk add no cache build base bash libressl dev zlib dev run mkdir p test workdir test add test run tar xzf tar gz workdir test librdkafka run configure prefix usr run make checklist please provide the following information librdkafka version release number or git tag apache kafka version n a librdkafka client configuration n a operating system alpine linux edge provide logs with debug as necessary from librdkafka provide broker log excerpts critical issue,1
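A common way to make version gates like the one referenced above safe is to exclude LibreSSL explicitly, since LibreSSL defines LIBRESSL_VERSION_NUMBER while reporting the inflated OPENSSL_VERSION_NUMBER of 0x20000000L. A hedged sketch of that pattern (not necessarily the exact change librdkafka ended up applying):
```c
#include <openssl/opensslv.h>
#include <openssl/ssl.h>

/* Use the OpenSSL 1.0.2+ API only when the library really is OpenSSL 1.0.2+. */
#if OPENSSL_VERSION_NUMBER >= 0x1000200fL && !defined(LIBRESSL_VERSION_NUMBER)
#define HAVE_SET1_SIGALGS_LIST 1
#endif

int set_sigalgs_list(SSL_CTX *ctx, const char *list) {
#ifdef HAVE_SET1_SIGALGS_LIST
    return (int)SSL_CTX_set1_sigalgs_list(ctx, list);
#else
    (void)ctx;
    (void)list;
    return -1;   /* ssl.sigalgs.list not supported by this TLS library */
#endif
}
```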
76,3000853045.0,IssuesEvent,2015-07-24 06:46:39,svaarala/duktape,https://api.github.com/repos/svaarala/duktape,opened,Remove support for DUK_OPT_xxx flags,portability,Remove support for `DUK_OPT_xxx` from `duk_config.h` and `genconfig`. Update documentation to match.,True,Remove support for DUK_OPT_xxx flags - Remove support for `DUK_OPT_xxx` from `duk_config.h` and `genconfig`. Update documentation to match.,1,remove support for duk opt xxx flags remove support for duk opt xxx from duk config h and genconfig update documentation to match ,1
9849,3075118580.0,IssuesEvent,2015-08-20 11:47:25,YaccConstructor/YaccConstructor,https://api.github.com/repos/YaccConstructor/YaccConstructor,opened,Improve R# plugin configuration,CourseWork2 R# SimpleTestTask task,"Use standard R# configuration dialog to configure
- [ ] code highlighting
- [ ] hotspots
instead of manual XML fixing.",1.0,"Improve R# plugin configuration - Use standard R# configuration dialog to configure
- [ ] code highlighting
- [ ] hotspots
instead of manual XML fixing.",0,improve r plugin configuration use standard r configuration dialog to configure code highlighting hotspots instead of manual xml fixing ,0
1811,26775173157.0,IssuesEvent,2023-01-31 16:38:32,alcionai/corso,https://api.github.com/repos/alcionai/corso,closed,GC: SharePoint: Minimal-SitePage Data Type ,supportability,"Create a minimal version of `SharePoint.Pages` into the repository until data type is introduced into v1.0 of `msgraph`.
Data type needs to support the [Parsable](https://pkg.go.dev/github.com/microsoft/kiota-abstractions-go@v0.16.0/serialization#Parsable) interface.",True,"GC: SharePoint: Minimal-SitePage Data Type - Create a minimal version of `SharePoint.Pages` into the repository until data type is introduced into v1.0 of `msgraph`.
Data type needs to support the [Parsable](https://pkg.go.dev/github.com/microsoft/kiota-abstractions-go@v0.16.0/serialization#Parsable) interface.",1,gc sharepoint minimal sitepage data type create a minimal version of sharepoint pages into the repository until data type is introduced into of msgraph data type needs to support the interface ,1
1922,30250615051.0,IssuesEvent,2023-07-06 20:13:31,golang/vulndb,https://api.github.com/repos/golang/vulndb,closed,x/vulndb: potential Go vuln in github.com/labring/sealos: GHSA-vpxf-q44g-w34w,excluded: NOT_IMPORTABLE,"In GitHub Security Advisory [GHSA-vpxf-q44g-w34w](https://github.com/advisories/GHSA-vpxf-q44g-w34w), there is a vulnerability in the following Go packages or modules:
| Unit | Fixed | Vulnerable Ranges |
| - | - | - |
| [github.com/labring/sealos](https://pkg.go.dev/github.com/labring/sealos) | | <= 4.2.0 |
Cross references:
No existing reports found with this module or alias.
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/labring/sealos
versions:
- {}
vulnerable_at: 1.14.0
packages:
- package: github.com/labring/sealos
summary: Sealos billing system permission control defect
description: |-
### Summary
There is a permission flaw in the Sealos billing system, which allows users to
control the recharge resource account. sealos. io/v1/Payment, resulting in the
ability to recharge any amount of 1 RMB.
### Details
The reason is that sealos is in arrears. Egg pain, we can't create a terminal
anymore. Let's charge for it:
Then it was discovered that the charging interface had returned all resource
information. Unfortunately, based on previous vulnerability experience, the
namespace of this custom resource is still under the current user's control and
may have permission to correct it.
### PoC disable by publish
### Impact
+ sealos public cloud user
+ CWE-287 Improper Authentication
ghsas:
- GHSA-vpxf-q44g-w34w
references:
- advisory: https://github.com/labring/sealos/security/advisories/GHSA-vpxf-q44g-w34w
- advisory: https://github.com/advisories/GHSA-vpxf-q44g-w34w
```",True,"x/vulndb: potential Go vuln in github.com/labring/sealos: GHSA-vpxf-q44g-w34w - In GitHub Security Advisory [GHSA-vpxf-q44g-w34w](https://github.com/advisories/GHSA-vpxf-q44g-w34w), there is a vulnerability in the following Go packages or modules:
| Unit | Fixed | Vulnerable Ranges |
| - | - | - |
| [github.com/labring/sealos](https://pkg.go.dev/github.com/labring/sealos) | | <= 4.2.0 |
Cross references:
No existing reports found with this module or alias.
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/labring/sealos
versions:
- {}
vulnerable_at: 1.14.0
packages:
- package: github.com/labring/sealos
summary: Sealos billing system permission control defect
description: |-
### Summary
There is a permission flaw in the Sealos billing system, which allows users to
control the recharge resource account. sealos. io/v1/Payment, resulting in the
ability to recharge any amount of 1 RMB.
### Details
The reason is that sealos is in arrears. Egg pain, we can't create a terminal
anymore. Let's charge for it:
Then it was discovered that the charging interface had returned all resource
information. Unfortunately, based on previous vulnerability experience, the
namespace of this custom resource is still under the current user's control and
may have permission to correct it.
### PoC disable by publish
### Impact
+ sealos public cloud user
+ CWE-287 Improper Authentication
ghsas:
- GHSA-vpxf-q44g-w34w
references:
- advisory: https://github.com/labring/sealos/security/advisories/GHSA-vpxf-q44g-w34w
- advisory: https://github.com/advisories/GHSA-vpxf-q44g-w34w
```",1,x vulndb potential go vuln in github com labring sealos ghsa vpxf in github security advisory there is a vulnerability in the following go packages or modules unit fixed vulnerable ranges cross references no existing reports found with this module or alias see for instructions on how to triage this report modules module github com labring sealos versions vulnerable at packages package github com labring sealos summary sealos billing system permission control defect description summary there is a permission flaw in the sealos billing system which allows users to control the recharge resource account sealos io payment resulting in the ability to recharge any amount of rmb details the reason is that sealos is in arrears egg pain we can t create a terminal anymore let s charge for it then it was discovered that the charging interface had returned all resource information unfortunately based on previous vulnerability experience the namespace of this custom resource is still under the current user s control and may have permission to correct it poc disable by publish impact sealos public cloud user cwe improper authentication ghsas ghsa vpxf references advisory advisory ,1
173752,14436483356.0,IssuesEvent,2020-12-07 10:11:38,cemac/SWIFT-Testbed3,https://api.github.com/repos/cemac/SWIFT-Testbed3,opened,Documentation ,documentation,"Document, license and add DOI
1. READMEs for each section
2. Wiki
3. Userguides for tools
4. Readme overview",1.0,"Documentation - Document, license and add DOI
1. READMEs for each section
2. Wiki
3. Userguides for tools
4. Readme overview",0,documentation document license and add doi readmes for each section wiki userguides for tools readme overview,0
391128,26881209315.0,IssuesEvent,2023-02-05 17:03:48,Seneca-CDOT/starchart,https://api.github.com/repos/Seneca-CDOT/starchart,closed,Prisma introduction in wiki,documentation,"Currently, there's no introduction for Prisma in our wiki to serve as a starting point for looking into the tool.
While the official site has [good documentation](https://www.prisma.io/docs/concepts/overview/what-is-prisma), we might still want to add a helpful entry for navigating this according to how we want to use it.
Some points to touch on:
- How to set it up
- How to connect to a MySQL database
- How to do CRUD operations",1.0,"Prisma introduction in wiki - Currently, there's no introduction for Prisma in our wiki to serve as a starting point for looking into the tool.
While the official site has [good documentation](https://www.prisma.io/docs/concepts/overview/what-is-prisma), we might still want to add a helpful entry for navigating this according to how we want to use it.
Some points to touch on:
- How to set it up
- How to connect to a MySQL database
- How to do CRUD operations",0,prisma introduction in wiki currently there s no introduction for prisma in our wiki to serve as a starting point to looking into the tool while the official site has we might still want to add a helpful entry for navigating the this according to how we want to use it some points to touch on how to set it up how to connect to a mysql database how to do crud operations,0
485,6979688036.0,IssuesEvent,2017-12-12 21:58:14,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,Inconsistent compiler behaviour between OS X and Windows if keyFile is the empty string,Area-Compilers Bug Concept-CoreCLR Concept-Portability,"**Version Used**:
_OS X_:
``` sh
computer:api user$ dotnet --info
.NET Command Line Tools (1.0.0-preview1-002702)
Product Information:
Version: 1.0.0-preview1-002702
Commit Sha: 6cde21225e
Runtime Environment:
OS Name: Mac OS X
OS Version: 10.11
OS Platform: Darwin
RID: osx.10.11-x64
```
_Windows_:
``` cmd
PS C:\Coding\api> dotnet --info
.NET Command Line Tools (1.0.0-preview1-002702)
Product Information:
Version: 1.0.0-preview1-002702
Commit Sha: 6cde21225e
Runtime Environment:
OS Name: Windows
OS Version: 10.0.10240
OS Platform: Windows
RID: win10-x64
```
**Steps to Reproduce**:
1. Clone [https://github.com/martincostello/api.git](https://github.com/martincostello/api.git) from commit [c7d2e8a448470c07d98160bc19cbc2786cb28fc6](https://github.com/martincostello/api/commit/c7d2e8a448470c07d98160bc19cbc2786cb28fc6).
2. Run `dotnet build src/API` from the root of the repository.
**Expected Behaviour**:
OS X and Windows build attempts exhibit the same behaviour, which is either to:
1. Treat `""""` as `null` on both platforms and compile successfully with no strong name, or;
2. Treat `""""` as an invalid key file and error with `error CS7088: Invalid 'CryptoKeyFile' value: ''.` on both platforms. I'm tending towards this being the preferable behaviour as in my case I didn't have a key file at all, so I resolved the error by using `null` explicitly in [this commit](https://github.com/martincostello/api/commit/0dc2557083103f2ac500ae871a9a901cb0c8a293).
**Actual Behaviour**:
_OS X_:
``` sh
computer:api user$ dotnet build src/API
Project API (.NETCoreApp,Version=v1.0) will be compiled because expected inputs are missing
Compiling API for .NETCoreApp,Version=v1.0
/usr/local/share/dotnet/dotnet compile-csc @/Users/user/Coding/api/src/API/obj/Debug/netcoreapp1.0/dotnet-compile.rsp returned Exit Code 1
/Users/user/Coding/api/src/API/error CS7088: Invalid 'CryptoKeyFile' value: ''.
Compilation failed.
0 Warning(s)
1 Error(s)
Time elapsed 00:00:01.5841885
```
_Windows_:
``` cmd
PS C:\Coding\api> dotnet build .\src\API\
Project API (.NETCoreApp,Version=v1.0) will be compiled because expected inputs are missing
Compiling API for .NETCoreApp,Version=v1.0
Compilation succeeded.
0 Warning(s)
0 Error(s)
Time elapsed 00:00:02.5016735
```
",True,"Inconsistent compiler behaviour between OS X and Windows if keyFile is the empty string - **Version Used**:
_OS X_:
``` sh
computer:api user$ dotnet --info
.NET Command Line Tools (1.0.0-preview1-002702)
Product Information:
Version: 1.0.0-preview1-002702
Commit Sha: 6cde21225e
Runtime Environment:
OS Name: Mac OS X
OS Version: 10.11
OS Platform: Darwin
RID: osx.10.11-x64
```
_Windows_:
``` cmd
PS C:\Coding\api> dotnet --info
.NET Command Line Tools (1.0.0-preview1-002702)
Product Information:
Version: 1.0.0-preview1-002702
Commit Sha: 6cde21225e
Runtime Environment:
OS Name: Windows
OS Version: 10.0.10240
OS Platform: Windows
RID: win10-x64
```
**Steps to Reproduce**:
1. Clone [https://github.com/martincostello/api.git](https://github.com/martincostello/api.git) from commit [c7d2e8a448470c07d98160bc19cbc2786cb28fc6](https://github.com/martincostello/api/commit/c7d2e8a448470c07d98160bc19cbc2786cb28fc6).
2. Run `dotnet build src/API` from the root of the repository.
**Expected Behaviour**:
OS X and Windows build attempts exhibit the same behaviour, which is either to:
1. Treat `""""` as `null` on both platforms and compile successfully with no strong name, or;
2. Treat `""""` as an invalid key file and error with `error CS7088: Invalid 'CryptoKeyFile' value: ''.` on both platforms. I'm tending towards this being the preferable behaviour as in my case I didn't have a key file at all, so I resolved the error by using `null` explicitly in [this commit](https://github.com/martincostello/api/commit/0dc2557083103f2ac500ae871a9a901cb0c8a293).
**Actual Behaviour**:
_OS X_:
``` sh
computer:api user$ dotnet build src/API
Project API (.NETCoreApp,Version=v1.0) will be compiled because expected inputs are missing
Compiling API for .NETCoreApp,Version=v1.0
/usr/local/share/dotnet/dotnet compile-csc @/Users/user/Coding/api/src/API/obj/Debug/netcoreapp1.0/dotnet-compile.rsp returned Exit Code 1
/Users/user/Coding/api/src/API/error CS7088: Invalid 'CryptoKeyFile' value: ''.
Compilation failed.
0 Warning(s)
1 Error(s)
Time elapsed 00:00:01.5841885
```
_Windows_:
``` cmd
PS C:\Coding\api> dotnet build .\src\API\
Project API (.NETCoreApp,Version=v1.0) will be compiled because expected inputs are missing
Compiling API for .NETCoreApp,Version=v1.0
Compilation succeeded.
0 Warning(s)
0 Error(s)
Time elapsed 00:00:02.5016735
```
",1,inconsistent compiler behaviour between os x and windows if keyfile is the empty string version used os x sh computer api user dotnet info net command line tools product information version commit sha runtime environment os name mac os x os version os platform darwin rid osx windows cmd ps c coding api dotnet info net command line tools product information version commit sha runtime environment os name windows os version os platform windows rid steps to reproduce clone from commit run dotnet build src api from the root of the repository expected behaviour os x and windows build attempts exhibit the same behaviour which is either to treat as null on both platforms and compile successfully with no strong name or treat as an invalid key file and error with error invalid cryptokeyfile value on both platforms i m tending towards this being the preferrable behaviour as in my case i didn t have a key file at all so i resolved the error by using null explicitly in actual behaviour os x sh computer api user dotnet build src api project api netcoreapp version will be compiled because expected inputs are missing compiling api for netcoreapp version usr local share dotnet dotnet compile csc users user coding api src api obj debug dotnet compile rsp returned exit code users user coding api src api error invalid cryptokeyfile value compilation failed warning s error s time elapsed windows cmd ps c coding api dotnet build src api project api netcoreapp version will be compiled because expected inputs are missing compiling api for netcoreapp version compilation succeeded warning s error s time elapsed ,1
1202,15512260506.0,IssuesEvent,2021-03-12 01:20:22,verilator/verilator,https://api.github.com/repos/verilator/verilator,closed,Unsupported: Compile multi-threads's model with msvc on windows 10,area: portability resolution: abandoned,"> Verilator Version:
```
Verilator 4.100 2020-09-07 rev v4.040-74-g16fba5948
```
> OS and Compiler:
```
Windows 10 and Visual Studio 2017
```
> Verilator Command:
```
verilator -Wall --sc --trace --threads 2 --compiler msvc example.v
```
> Add all generated source files to the VS project, then compile the project. Unfortunately, the compiler reports the following error:
```
verilated_threads.h(166): error C3861: 'GetCurrentProcessorNumber': identifier not found
```
> I think this error is caused by a missing header file, so I tried the approach this link ([getprocessidofthread-identifier-not-found](https://stackoverflow.com/questions/30029140/getprocessidofthread-identifier-not-found)) suggests. The compiler still reports the same error. I hope you can help me solve this problem, thanks.
> _**Notice:**_ compile and run single-thread model normally.",True,"Unsupported: Compile multi-threads's model with msvc on windows 10 - > Verilator Version:
```
Verilator 4.100 2020-09-07 rev v4.040-74-g16fba5948
```
> OS and Compiler:
```
Windows 10 and Visual Studio 2017
```
> Verilator Command:
```
verilator -Wall --sc --trace --threads 2 --compiler msvc example.v
```
> Add all generated source files to the VS project, then compile the project. Unfortunately, the compiler reports the following error:
```
verilated_threads.h(166): error C3861: 'GetCurrentProcessorNumber': identifier not found
```
> I think this error is caused by a missing header file, so I tried the approach this link ([getprocessidofthread-identifier-not-found](https://stackoverflow.com/questions/30029140/getprocessidofthread-identifier-not-found)) suggests. The compiler still reports the same error. I hope you can help me solve this problem, thanks.
> _**Notice:**_ compile and run single-thread model normally.",1,unsupported compile multi threads s model with msvc on windows verilator version verilator rev os and compiler windows and visual studio verilator command verilator wall sc trace threads compiler msvc example v add all generated source files to the vs project,then compile the project unfortunately the compiler reports the following error verilated threads h error getcurrentprocessornumber identifier not found i think that this error is caused by the missing header file then i try the way this link provides the compiler still reports the same error i hope you can help me solve this problem thanks notice compile and run single thread model normally ,1
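For context on the C3861 above: in the Windows SDK headers, GetCurrentProcessorNumber() is only declared when the build targets Windows Vista or newer (_WIN32_WINNT >= 0x0600), so the error usually means the translation unit is being compiled for an older target. A standalone C probe that illustrates the declaration requirement; whether adjusting _WIN32_WINNT is the right fix for verilated_threads.h is a question for the maintainers:
```c
/* Define the target version before any Windows header so the declaration is visible. */
#ifndef _WIN32_WINNT
#define _WIN32_WINNT 0x0600   /* Windows Vista / Server 2008 or newer */
#endif
#include <windows.h>
#include <stdio.h>

int main(void) {
    DWORD cpu = GetCurrentProcessorNumber();
    printf("running on logical processor %lu\n", (unsigned long)cpu);
    return 0;
}
```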
56,2870090696.0,IssuesEvent,2015-06-06 20:33:39,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,MinGW cross-compile is currently failing,bug enhancement portability,"```
$ cat /etc/fedora-release
Fedora release 21 (Twenty One)
$ ./configure --host=x86_64-w64-mingw32
$ make
...
luks2john.c: In function 'hash_plugin_parse_hash':
luks2john.c:167:3: warning: unknown conversion type character 'z' in format [-Wformat=]
printf(""$luks$1$%zu$"", sizeof(myphdr));
^
luks2john.c:191:3: warning: unknown conversion type character 'z' in format [-Wformat=]
printf(""$luks$0$%zu$"", sizeof(myphdr));
^
luks2john.o: In function `hash_plugin_parse_hash':
/JohnTheRipper/src/luks2john.c:171: undefined reference to `BIO_f_base64'
/JohnTheRipper/src/luks2john.c:171: undefined reference to `BIO_new'
/JohnTheRipper/src/luks2john.c:172: undefined reference to `BIO_new_fp'
/JohnTheRipper/src/luks2john.c:173: undefined reference to `BIO_push'
/JohnTheRipper/src/luks2john.c:174: undefined reference to `BIO_set_flags'
/JohnTheRipper/src/luks2john.c:175: undefined reference to `BIO_write'
/JohnTheRipper/src/luks2john.c:176: undefined reference to `BIO_ctrl'
/JohnTheRipper/src/luks2john.c:181: undefined reference to `BIO_free_all'
jumbo.o: In function `setenv':
/JohnTheRipper/src/jumbo.c:331: undefined reference to `mem_alloc_tiny_func'
collect2: error: ld returned 1 exit status
make[1]: *** [../run/luks2john.exe] Error 1
```",True,"MinGW cross-compile is currently failing - ```
$ cat /etc/fedora-release
Fedora release 21 (Twenty One)
$ ./configure --host=x86_64-w64-mingw32
$ make
...
luks2john.c: In function 'hash_plugin_parse_hash':
luks2john.c:167:3: warning: unknown conversion type character 'z' in format [-Wformat=]
printf(""$luks$1$%zu$"", sizeof(myphdr));
^
luks2john.c:191:3: warning: unknown conversion type character 'z' in format [-Wformat=]
printf(""$luks$0$%zu$"", sizeof(myphdr));
^
luks2john.o: In function `hash_plugin_parse_hash':
/JohnTheRipper/src/luks2john.c:171: undefined reference to `BIO_f_base64'
/JohnTheRipper/src/luks2john.c:171: undefined reference to `BIO_new'
/JohnTheRipper/src/luks2john.c:172: undefined reference to `BIO_new_fp'
/JohnTheRipper/src/luks2john.c:173: undefined reference to `BIO_push'
/JohnTheRipper/src/luks2john.c:174: undefined reference to `BIO_set_flags'
/JohnTheRipper/src/luks2john.c:175: undefined reference to `BIO_write'
/JohnTheRipper/src/luks2john.c:176: undefined reference to `BIO_ctrl'
/JohnTheRipper/src/luks2john.c:181: undefined reference to `BIO_free_all'
jumbo.o: In function `setenv':
/JohnTheRipper/src/jumbo.c:331: undefined reference to `mem_alloc_tiny_func'
collect2: error: ld returned 1 exit status
make[1]: *** [../run/luks2john.exe] Error 1
```",1,mingw cross compile is currently failing cat etc fedora release fedora release twenty one configure host make c in function hash plugin parse hash c warning unknown conversion type character z in format printf luks zu sizeof myphdr c warning unknown conversion type character z in format printf luks zu sizeof myphdr o in function hash plugin parse hash johntheripper src c undefined reference to bio f johntheripper src c undefined reference to bio new johntheripper src c undefined reference to bio new fp johntheripper src c undefined reference to bio push johntheripper src c undefined reference to bio set flags johntheripper src c undefined reference to bio write johntheripper src c undefined reference to bio ctrl johntheripper src c undefined reference to bio free all jumbo o in function setenv johntheripper src jumbo c undefined reference to mem alloc tiny func error ld returned exit status make error ,1
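Two separate problems appear in the log above: the `unknown conversion type character 'z'` warnings mean the format strings are being checked against the legacy msvcrt `printf`, which predates C99 and has no `%zu`, while the undefined `BIO_*` and `mem_alloc_tiny_func` references are link-time issues. A hedged C sketch of the usual portable workaround for the format warning (cast the `size_t` to a type every runtime can print, or enable mingw-w64's ANSI-compliant stdio):
```c
/* Sketch only, not the actual luks2john.c change.  The struct below is a
 * hypothetical stand-in for myphdr; only the printf idiom matters here. */
/* #define __USE_MINGW_ANSI_STDIO 1 */  /* mingw-w64 alternative: makes %zu work */
#include <stdio.h>

struct phdr_placeholder {
    char magic[6];
    unsigned char payload[128];
};

int main(void)
{
    /* Casting to unsigned long and using %lu prints correctly on both
     * msvcrt-based and POSIX C runtimes for any realistically sized struct. */
    printf("$luks$1$%lu$\n", (unsigned long)sizeof(struct phdr_placeholder));
    return 0;
}
```
The undefined references are a separate matter: the `BIO_*` symbols live in OpenSSL's libcrypto, so the cross-build likely needs a MinGW-targeted OpenSSL placed after the objects that use it on the link line, and the missing `mem_alloc_tiny_func` suggests the helper object that defines it was not included in that particular link.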
122,3392863414.0,IssuesEvent,2015-11-30 21:28:28,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,Roslyn cannot delay sign with ECMA key on Linux,Area-Compilers Concept-Portability Feature Request,"In https://github.com/dotnet/corefx/pull/1601 I had to work around this by putting a different key on an assembly temporarily. The error you get is the following:
```
CSC : error CS7027: Error signing output with public key from file '/home/jeremy/repos/corefx/packages/Microsoft.DotNet.BuildTools.1.0.25-prerelease-00040/lib/ECMA.snk' -- mscoree.dll [/home/jeremy/repos/corefx/src/System.IO.Compression/src/System.IO.Compression.csproj]
CSC : warning CS7033: Delay signing was specified and requires a public key, but no public key was specified [/home/jeremy/repos/corefx/src/System.IO.Compression/src/System.IO.Compression.csproj]
```
The project in CoreFX that uses this key currently is `corefx/src/System.IO.Compression/src/System.IO.Compression.csproj`. My repro was on CoreFx master with the xplat branch of MSBuild on Mono 3.12.1.
```
mono ~/repos/msbuild/bin/Unix/Debug-MONO/MSBuild.exe -t:Build -fl ""-flp:LogFile=msbuild-corefx.log;V=diag;"" -p:Configuration=Linux_Debug -p:UseRoslynCompiler=true ~/repos/corefx/src/System.IO.Compression/src/System.IO.Compression.csproj
```",True,"Roslyn cannot delay sign with ECMA key on Linux - In https://github.com/dotnet/corefx/pull/1601 I had to work around this by putting a different key on an assembly temporarily. The error you get is the following:
```
CSC : error CS7027: Error signing output with public key from file '/home/jeremy/repos/corefx/packages/Microsoft.DotNet.BuildTools.1.0.25-prerelease-00040/lib/ECMA.snk' -- mscoree.dll [/home/jeremy/repos/corefx/src/System.IO.Compression/src/System.IO.Compression.csproj]
CSC : warning CS7033: Delay signing was specified and requires a public key, but no public key was specified [/home/jeremy/repos/corefx/src/System.IO.Compression/src/System.IO.Compression.csproj]
```
The project in CoreFX that uses this key currently is `corefx/src/System.IO.Compression/src/System.IO.Compression.csproj`. My repro was on CoreFx master with the xplat branch of MSBuild on Mono 3.12.1.
```
mono ~/repos/msbuild/bin/Unix/Debug-MONO/MSBuild.exe -t:Build -fl ""-flp:LogFile=msbuild-corefx.log;V=diag;"" -p:Configuration=Linux_Debug -p:UseRoslynCompiler=true ~/repos/corefx/src/System.IO.Compression/src/System.IO.Compression.csproj
```",1,roslyn cannot delay sign with ecma key on linux in i had to work around this by putting a different key on an assembly temporarily the error you get is the following csc error error signing output with public key from file home jeremy repos corefx packages microsoft dotnet buildtools prerelease lib ecma snk mscoree dll csc warning delay signing was specified and requires a public key but no public key was specified the project in corefx that uses this key currently is corefx src system io compression src system io compression csproj my repro was on corefx master with the xplat branch of msbuild on mono mono repos msbuild bin unix debug mono msbuild exe t build fl flp logfile msbuild corefx log v diag p configuration linux debug p useroslyncompiler true repos corefx src system io compression src system io compression csproj ,1
202334,15281127698.0,IssuesEvent,2021-02-23 07:36:39,f2etw/jobs,https://api.github.com/repos/f2etw/jobs,closed,[徵才] VerdantSparks | Full-Stack Software Engineer,Work Remotely [F] Vue.js unit test,"We are a software startup company located in Taipei with a Hong Konger founder. We treasure professional talents with passion. We focus on your proven skills more than academic results. If you are passionate about technology, enjoy a non-traditional working environment and want to improve animal welfare, **this job is for you**.
### Tasks:
- Full-stack Software development.
- Participate in UX design.
- On-site support at exhibition booths may be required.
- Meeting clients.
- Remote working.
### Education background and work experience required:
- Degree of Computer Science or equivalent qualification from the world's top 200 universities and working experiences from Fortune 500 companies.
- **Fluent English is a must.**
- You have at least one mobile App under your name deployed in production Apple App Store/Google Play Store.
### Technical requirement:
- VueJS, JavaScript, TypeScript, NativeScript, C#, .NETCore, Python.
- MongoDb, MS-SQL Server, Firebase and other SQL/NoSQL vendors.
- GitHub, Docker, webpack, npm, yarn, DevOps, Cloud Computing.
- Slack, WebAPI, Cybersecurity concepts.
- Implementation experience of Blockchain and Machine Learning.
- You have a good understanding of, and are passionate about, writing clean and performant code. That means:
  - You write tests.
  - You know OOP and design patterns well.
  - Your code doesn't smell.
  - You write documentation for your code.
  - You write code with a plan.
### Personal characteristics required:
- You are able to work under pressure and to tight schedules.
- You will proactively contribute to the company.
- You demonstrate phenomenal problem-solving skills.
- You are honest with the company, the clients, and yourself.
- You take responsibility for your own mistakes.
- You work as a good team player.
- You love technology and programming.
- **You own and love your pets and care about animals welfare.**
### Extras
+ Cantonese language skill will be a big plus.
+ Contributions to the open-source world are appreciated.
### Remuneration package
- Annual salary: 1M - 1.5M+ NTD (Labor Insurance & National Health Insurance included).
- National Health Insurance, Labor Insurance and Labor Pension as required from Taiwan government.
- Working hours and annual leaves: Follow [Labor Standards Act](https://law.moj.gov.tw/ENG/LawClass/LawAll.aspx?pcode=N0030001)
### Apply information
1. Send your CV/resume with title ""Apply for Full-stack Software Engineer"" to [here](mailto:apply@verdantsparks.io).
2. You should include your LinkedIn/StackOverflow profile URL in your resume.
3. You **MUST** have a portfolio referencing your previous work.
   - GitHub/GitLab or equivalent public repository of your previous work/personal projects/contributions to the open-source community.
   - **We discard applications without a portfolio directly.**
4. Shortlisted candidates will be invited for a coding test.
5. There will be a casual face-to-face interview if you pass the coding test.
6. Although we are located in Taipei, you can work from anywhere. We would be happy if you live around the Taichung area.
### Please refer to [here](https://verdantsparks.io) for our company information.
",1.0,"[徵才] VerdantSparks | Full-Stack Software Engineer - We are a software startup company located in Taipei with a Hong Konger founder. We treasure professional talents with passion. We focus on your proven skills more than academic results. If you are passionate about technology, enjoy a non-traditional working environment and want to improve animal welfare, **this job is for you**.
### Tasks:
- Full-stack Software development.
- Participate in UX design.
- On-site support at exhibition booths may be required.
- Meeting clients.
- Remote working.
### Education background and work experience required:
- Degree of Computer Science or equivalent qualification from the world's top 200 universities and working experiences from Fortune 500 companies.
- **Fluent English is a must.**
- You have at least one mobile App under your name deployed in production Apple App Store/Google Play Store.
### Technical requirement:
- VueJS, JavaScript, TypeScript, NativeScript, C#, .NETCore, Python.
- MongoDb, MS-SQL Server, Firebase and other SQL/NoSQL vendors.
- GitHub, Docker, webpack, npm, yarn, DevOps, Cloud Computing.
- Slack, WebAPI, Cybersecurity concepts.
- Implementation experience of Blockchain and Machine Learning.
- You have a good understanding of, and are passionate about, writing clean and performant code. That means:
  - You write tests.
  - You know OOP and design patterns well.
  - Your code doesn't smell.
  - You write documentation for your code.
  - You write code with a plan.
### Personal characteristics required:
- You are able to work under pressure and to tight schedules.
- You will proactively contribute to the company.
- You demonstrate phenomenal problem-solving skills.
- You are honest with the company, the clients, and yourself.
- You take responsibility for your own mistakes.
- You work as a good team player.
- You love technology and programming.
- **You own and love your pets and care about animals welfare.**
### Extras
+ Cantonese language skill will be a big plus.
+ Contributions to the open-source world are appreciated.
### Remuneration package
- Annual salary: 1M - 1.5M+ NTD (Labor Insurance & National Health Insurance included).
- National Health Insurance, Labor Insurance and Labor Pension as required from Taiwan government.
- Working hours and annual leaves: Follow [Labor Standards Act](https://law.moj.gov.tw/ENG/LawClass/LawAll.aspx?pcode=N0030001)
### Apply information
1. Send your CV/resume with title ""Apply for Full-stack Software Engineer"" to [here](mailto:apply@verdantsparks.io).
2. You should include your LinkedIn/StackOverflow profile URL in your resume.
3. You **MUST** have a portfolio referencing your previous work.
   - GitHub/GitLab or equivalent public repository of your previous work/personal projects/contributions to the open-source community.
   - **We discard applications without a portfolio directly.**
4. Shortlisted candidates will be invited for a coding test.
5. There will be a casual face-to-face interview if you pass the coding test.
6. Although we are located in Taipei, you can work from anywhere. We would be happy if you live around the Taichung area.
### Please refer to [here](https://verdantsparks.io) for our company information.
",0, verdantsparks full stack software engineer we are a software startup company located in taipei with hong konger founder we treasure professional talents who with passion we focus on your proven skills more than academic results if you are passionate in technology enjoy non traditional working environment and want to improve animal welfare this job is for you tasks full stack software development participate in ux design on site support in exhibition booth may required meeting clients remote working education background and work experience required degree of computer science or equivalent qualification from the world s top universities and working experiences from fortune companies fluent english is a must you have at least one mobile app under your name deployed in production apple app store google play store technical requirement vuejs javascript typescript nativescript c netcore python mongodb ms sql server firebase and other sql nosql vendors github docker webpack npm yarn devops cloud computing slack webapi cybersecurity concepts implementation experience of blockchain and machine learning you have good understanding and passionate in writing clean and performant code that means you write tests you know oop and design patterns well your code don t smell you write documentations about your code you write code with plan personal characteristics required you able to work under pressure and tight schedule you will proactively contribute to the company you demonstrates phenomenal problem solving skills you are honest with the company the clients and yourself you take responsibility for your own mistakes you work as a good team player you love technology and programming you own and love your pets and care about animals welfare extras cantonese language skill will be a big plus contributions in open source world is appreciated remuneration package annual salary ntd labor insurance national health insurance included national health insurance labor insurance and labor pension as required from taiwan government working hours and annual leaves follow apply information send your cv resume with title apply for full stack software engineer to mailto apply verdantsparks io your should include your linkedin stackoverflow profile url in your resume you must have portfolio for referencing your previous work github gitlab or equivalent public repository of your previous work personal projects contributions to open source community we discard application without portfolio directly shortlisted candidate will be invited for coding test there will be a casual face to face interview if you pass the coding test although we located in taipei you can work in anywhere we will happy if you live around taichung area please refer to for our company information ,0
1718,25074634944.0,IssuesEvent,2022-11-07 14:40:36,elastic/elasticsearch,https://api.github.com/repos/elastic/elasticsearch,opened,"Expose recovery, snapshot, and restore rate limits and throttle times in node stats",>enhancement :Distributed/Snapshot/Restore :Distributed/Recovery Team:Distributed needs:triage Supportability,"### Description
The rate limits for recoveries and snapshots can be indirectly computed from Elasticsearch configurations, which makes it hard to ascertain their real values used during runtime because:
- https://github.com/elastic/elasticsearch/pull/82819 introduced a way to indirectly influence the recovery speed `indices.recovery.max_bytes_per_sec` by configuring three key bandwidth metrics settings.
- Snapshot recovery rate limit `max_restore_bytes_per_sec` is already capped by the recovery rate limit.
- https://github.com/elastic/elasticsearch/issues/57023 aims to tie and potentially cap the snapshot speed `max_snapshot_bytes_per_sec` to the recovery limit (when the node bandwidth metrics settings are configured) apart from the existing snapshot speed configuration.
- Furthermore, some of the aforementioned limits can be configured per node and/or repository and/or at runtime.
To make observability of these rate limits easier, the proposal is to expose the final used rate limit values (considering also any cap, e.g., by the recovery rate limit) in the node stats. Snapshot rate limits will be reported per repository.
Apart from the speed, the proposal is to also expose the throttling times more prominently. Specifically:
- [The repos analysis API](https://www.elastic.co/guide/en/elasticsearch/reference/master/repo-analysis-api.html) exposes the snapshot and snapshot restore throttle times. But this is not useful for live usage. We could expose the throttle times in node stats per repository.
- Recovery throttling is already exposed in [nodes stats](https://www.elastic.co/guide/en/elasticsearch/reference/master/cluster-nodes-stats.html) under node_id > indices > recovery > throttle_time. I think no change is needed for recovery throttling stats.
",True,"Expose recovery, snapshot, and restore rate limits and throttle times in node stats - ### Description
The rate limits for recoveries and snapshots can be indirectly computed from Elasticsearch configurations, which makes it hard to ascertain their real values used during runtime because:
- https://github.com/elastic/elasticsearch/pull/82819 introduced a way to indirectly influence the recovery speed `indices.recovery.max_bytes_per_sec` by configuring three key bandwidth metrics settings.
- Snapshot recovery rate limit `max_restore_bytes_per_sec` is already capped by the recovery rate limit.
- https://github.com/elastic/elasticsearch/issues/57023 aims to tie and potentially cap the snapshot speed `max_snapshot_bytes_per_sec` to the recovery limit (when the node bandwidth metrics settings are configured) apart from the existing snapshot speed configuration.
- Furthermore, some of the aforementioned limits can be configured per node and/or repository and/or at runtime.
To make observability of these rate limits easier, the proposal is to expose the final used rate limit values (considering also any cap, e.g., by the recovery rate limit) in the node stats. Snapshot rate limits will be reported per repository.
Apart from the speed, the proposal is to also expose the throttling times more prominently. Specifically:
- [The repos analysis API](https://www.elastic.co/guide/en/elasticsearch/reference/master/repo-analysis-api.html) exposes the snapshot and snapshot restore throttle times. But this is not useful for live usage. We could expose the throttle times in node stats per repository.
- Recovery throttling is already exposed in [nodes stats](https://www.elastic.co/guide/en/elasticsearch/reference/master/cluster-nodes-stats.html) under node_id > indices > recovery > throttle_time. I think no change is needed for recovery throttling stats.
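The value worth reporting here is the *effective* limit after capping (for example, a snapshot rate bounded by the recovery rate when the node bandwidth settings apply), not the raw setting. A trivial sketch of that computation — the names are illustrative only, not Elasticsearch settings keys or its Java code:
```c
/* Illustrative only: what an "effective rate limit" stat would report is the
 * configured value, clamped by whichever cap currently applies. */
#include <stdio.h>

static double effective_limit(double configured, double cap, int cap_applies)
{
    return (cap_applies && cap < configured) ? cap : configured;
}

int main(void)
{
    double recovery_mb_per_s = 40.0;   /* hypothetical recovery limit */
    double snapshot_mb_per_s = 100.0;  /* hypothetical configured snapshot limit */
    printf("effective snapshot limit: %.1f MB/s\n",
           effective_limit(snapshot_mb_per_s, recovery_mb_per_s, 1));
    return 0;
}
```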
",1,expose recovery snapshot and restore rate limits and throttle times in node stats description the rate limits for recoveries and snapshots can be indirectly computed from elasticsearch configurations which makes it hard to ascertain their real values used during runtime because introduced a way to indirectly influence the recovery speed indices recovery max bytes per sec by configuring three key bandwidth metrics settings snapshot recovery rate limit max restore bytes per sec is already capped by the recovery rate limit aims to tie and potentially cap the snapshot speed max snapshot bytes per sec to the recovery limit when the node bandwidth metrics settings are configured apart from the existing snapshot speed configuration furthermore some of the aforementioned limits can be configured per node and or repository and or at runtime to make observability of these rate limits easier the proposal is to expose the final used rate limit values considering also any cap e g by the recovery rate limit in the node stats snapshot rate limits will be reported per repository apart from the speed the proposal is to also expose the throttling times more prominently specifically exposes the snapshot and snapshot restore throttle times but this is not useful for live usage we could expose the throttle times in node stats per repository recovery throttling is already exposed in under node id indices recovery throttle time i think no change is needed for recovery throttling stats ,1
714240,24555317990.0,IssuesEvent,2022-10-12 15:26:19,AY2223S1-CS2103T-W17-2/tp,https://api.github.com/repos/AY2223S1-CS2103T-W17-2/tp,closed,Delete income,type.story priority.HIGH,"As a student with a source of income, I can delete income so that I can remove any wrong income records",1.0,"Delete income - As a student with a source of income, I can delete income so that I can remove any wrong income records",0,delete income as a student with a source of income i can delete income so that i can remove any wrong income records,0
738,9958391923.0,IssuesEvent,2019-07-05 20:54:20,nbs-system/snuffleupagus,https://api.github.com/repos/nbs-system/snuffleupagus,closed,.deb package doesn't seem to work,critical portability question,"I'm trying to install it inside a wordpress:php7.3 container, and unfortunately it does not seem to want to install at all.
Perhaps because it's debian, not ubuntu?
```
root@49f19cf372ad:/var/www/html# cat /etc/os-release
PRETTY_NAME=""Debian GNU/Linux 9 (stretch)""
NAME=""Debian GNU/Linux""
VERSION_ID=""9""
VERSION=""9 (stretch)""
ID=debian
HOME_URL=""https://www.debian.org/""
SUPPORT_URL=""https://www.debian.org/support""
BUG_REPORT_URL=""https://bugs.debian.org/""
```
Failure output:
```
root@49f19cf372ad:/var/www/html# apt install ./snuffleupagus_0.5.0_amd64.deb
Reading package lists... Error!
E: Sub-process Popen returned an error code (2)
E: Encountered a section with no Package: header
E: Problem with MergeList /var/www/html/snuffleupagus_0.5.0_amd64.deb
E: The package lists or status file could not be parsed or opened.
root@49f19cf372ad:/var/www/html# apt-get install ./snuffleupagus_0.5.0_amd64.deb
Reading package lists... Error!
E: Sub-process Popen returned an error code (2)
E: Encountered a section with no Package: header
E: Problem with MergeList /var/www/html/snuffleupagus_0.5.0_amd64.deb
E: The package lists or status file could not be parsed or opened.
root@49f19cf372ad:/var/www/html# dpkg -i ./snuffleupagus_0.5.0_amd64.deb
dpkg-deb: error: './snuffleupagus_0.5.0_amd64.deb' is not a debian format archive
dpkg: error processing archive ./snuffleupagus_0.5.0_amd64.deb (--install):
subprocess dpkg-deb --control returned error exit status 2
Errors were encountered while processing:
./snuffleupagus_0.5.0_amd64.deb
```",True,".deb package doesn't seem to work - I'm trying to install it inside a wordpress:php7.3 container, and unfortunately it does not seem to want to install at all.
Perhaps because it's debian, not ubuntu?
```
root@49f19cf372ad:/var/www/html# cat /etc/os-release
PRETTY_NAME=""Debian GNU/Linux 9 (stretch)""
NAME=""Debian GNU/Linux""
VERSION_ID=""9""
VERSION=""9 (stretch)""
ID=debian
HOME_URL=""https://www.debian.org/""
SUPPORT_URL=""https://www.debian.org/support""
BUG_REPORT_URL=""https://bugs.debian.org/""
```
Failure output:
```
root@49f19cf372ad:/var/www/html# apt install ./snuffleupagus_0.5.0_amd64.deb
Reading package lists... Error!
E: Sub-process Popen returned an error code (2)
E: Encountered a section with no Package: header
E: Problem with MergeList /var/www/html/snuffleupagus_0.5.0_amd64.deb
E: The package lists or status file could not be parsed or opened.
root@49f19cf372ad:/var/www/html# apt-get install ./snuffleupagus_0.5.0_amd64.deb
Reading package lists... Error!
E: Sub-process Popen returned an error code (2)
E: Encountered a section with no Package: header
E: Problem with MergeList /var/www/html/snuffleupagus_0.5.0_amd64.deb
E: The package lists or status file could not be parsed or opened.
root@49f19cf372ad:/var/www/html# dpkg -i ./snuffleupagus_0.5.0_amd64.deb
dpkg-deb: error: './snuffleupagus_0.5.0_amd64.deb' is not a debian format archive
dpkg: error processing archive ./snuffleupagus_0.5.0_amd64.deb (--install):
subprocess dpkg-deb --control returned error exit status 2
Errors were encountered while processing:
./snuffleupagus_0.5.0_amd64.deb
```",1, deb package doesn t seem to work i m trying to install it inside a wordpress container and unfortunately it does not seem to want to install at all perhaps because it s debian not ubuntu root var www html cat etc os release pretty name debian gnu linux stretch name debian gnu linux version id version stretch id debian home url support url bug report url failure output root var www html apt install snuffleupagus deb reading package lists error e sub process popen returned an error code e encountered a section with no package header e problem with mergelist var www html snuffleupagus deb e the package lists or status file could not be parsed or opened root var www html apt get install snuffleupagus deb reading package lists error e sub process popen returned an error code e encountered a section with no package header e problem with mergelist var www html snuffleupagus deb e the package lists or status file could not be parsed or opened root var www html dpkg i snuffleupagus deb dpkg deb error snuffleupagus deb is not a debian format archive dpkg error processing archive snuffleupagus deb install subprocess dpkg deb control returned error exit status errors were encountered while processing snuffleupagus deb ,1
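The `dpkg-deb: error: ... is not a debian format archive` message in the record above means the file on disk is not an `ar` archive at all — every valid `.deb` begins with the 8-byte magic `!<arch>` followed by a newline — which most often happens when the download actually saved an HTML error page or a truncated file rather than the package. A small C sketch (a hypothetical standalone checker, not part of snuffleupagus) for that magic:
```c
/* Sketch: check whether a file is really an ar archive (all .deb files are),
 * which helps distinguish a bad/truncated download from a packaging problem. */
#include <stdio.h>
#include <string.h>

static int looks_like_deb(const char *path)
{
    char magic[8] = {0};
    FILE *f = fopen(path, "rb");
    if (!f)
        return 0;
    size_t n = fread(magic, 1, sizeof magic, f);
    fclose(f);
    /* Every .deb is an ar archive and starts with "!<arch>\n". */
    return n == sizeof magic && memcmp(magic, "!<arch>\n", 8) == 0;
}

int main(int argc, char **argv)
{
    if (argc > 1)
        printf("%s: %s\n", argv[1],
               looks_like_deb(argv[1]) ? "ar archive (plausible .deb)" : "NOT a .deb");
    return 0;
}
```
Running `file snuffleupagus_0.5.0_amd64.deb` gives the same answer without writing any code; if it does not report a Debian binary package, re-downloading the release asset is the first thing to try.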
294754,22162186852.0,IssuesEvent,2022-06-04 17:08:12,msz/hammox,https://api.github.com/repos/msz/hammox,closed,explain Hammox's stance on maybe_improper_list,documentation,There is controversy about how improper list should be treated (see https://github.com/josefs/Gradualizer/issues/110). Explain the stance Hammox takes on this issue.,1.0,explain Hammox's stance on maybe_improper_list - There is controversy about how improper list should be treated (see https://github.com/josefs/Gradualizer/issues/110). Explain the stance Hammox takes on this issue.,0,explain hammox s stance on maybe improper list there is controversy about how improper list should be treated see explain the stance hammox takes on this issue ,0
5378,7887439810.0,IssuesEvent,2018-06-27 18:28:55,openopps/openopps-platform,https://api.github.com/repos/openopps/openopps-platform,closed,Profile buttons out of line,Bug Profile Requirements Ready,"Steps to reproduce:
1) Go to profile
2) Click edit
- save and discard buttons are not lined up

",1.0,"Profile buttons out of line - Steps to reproduce:
1) Go to profile
2) Click edit
- save and discard buttons are not lined up

",0,profile buttons out of line steps to reproduce go to profile click edit save and discard buttons are not lined up ,0
281,5332388226.0,IssuesEvent,2017-02-15 21:57:07,OpenSlides/OpenSlides,https://api.github.com/repos/OpenSlides/OpenSlides,closed,"Windows portable creates ""get_win32_portable_user_data_path()"" directory after start",high portable,"@normanjaeckel as discussed:
Start openslides portable (e.g. 2.1b3).
Main script creates settings.py successfully.
Then a directory ""get_win32_portable_user_data_path()"" is created.
see settings.py:
`OPENSLIDES_USER_DATA_PATH = get_win32_portable_user_data_path()`
expected:
Should create this new directory: .\openslides\static\
",True,"Windows portable creates ""get_win32_portable_user_data_path()"" directory after start - @normanjaeckel as discussed:
Start openslides portable (e.g. 2.1b3).
Main script creates settings.py successfully.
Then a directory ""get_win32_portable_user_data_path()"" is created.
see settings.py:
`OPENSLIDES_USER_DATA_PATH = get_win32_portable_user_data_path()`
expected:
Should create this new directory: .\openslides\static\
",1,windows portable creates get portable user data path directory after start normanjaeckel as discussed start openslides portable e g main script creates settings py successfully then a directory get portable user data path is created see settings py openslides user data path get portable user data path expected should create this new directory openslides static ,1
209,4349669190.0,IssuesEvent,2016-07-30 18:31:51,PHPOffice/PHPWord,https://api.github.com/repos/PHPOffice/PHPWord,closed,Remove TCPDF from the list of PDF renderers,HTML Open XML (Word 2007+) Portable Document (PDF),"Hi, @PHPOffice/phpword-team.
To get PDF from OOXML we use an intermediary conversion into HTML (OOXML -> HTML -> PDF). For the second step we offer three options: dompdf, mPDF, TCPDF. As a matter of fact, [TCPDF is not intended for HTML-to-PDF conversion](http://sourceforge.net/p/tcpdf/feature-requests/324/#98d1). I foresee a lot of problems with this fact and suggest removing TCPDF from the list.",True,"Remove TCPDF from the list of PDF renderers - Hi, @PHPOffice/phpword-team.
To get PDF from OOXML we use intermediary conversion into HTML (OOXML -> HTML -> PDF). For the second step we offer three options: dompdf, mPDF, TCPDF. As a matter of fact, [TCPDF is not intended for HTML-to-PDF conversion] (http://sourceforge.net/p/tcpdf/feature-requests/324/#98d1). I foresee lot of problems with this fact and suggest to remove TCPDF from the list.",1,remove tcpdf from the list of pdf renderers hi phpoffice phpword team to get pdf from ooxml we use intermediary conversion into html ooxml html pdf for the second step we offer three options dompdf mpdf tcpdf as a matter of fact i foresee lot of problems with this fact and suggest to remove tcpdf from the list ,1
151478,5821016593.0,IssuesEvent,2017-05-06 01:04:30,paceuniversity/CS3892017team2,https://api.github.com/repos/paceuniversity/CS3892017team2,closed,US9 - Adding cities to map - 8 hours,High Priority Sprint 1 Task1,"We would have multiple maps for each dynasty; this would be done in Photoshop. These cities would also be located accurately relative to ancient China.
Should Include:
- Major Cities
- Correct locations",1.0,"US9 - Adding cities to map - 8 hours - We would have multiple maps for each dynasty; this would be done in Photoshop. These cities would also be located accurately relative to ancient China.
Should Include:
- Major Cities
- Correct locations",0, adding cities to map hours we would have multiple maps for each dynasty this would be done in photoshop these cities would also be located accurately to ancient china should include major cities correct locations,0
142602,5476751791.0,IssuesEvent,2017-03-11 23:43:05,TauCetiStation/TauCetiClassic,https://api.github.com/repos/TauCetiStation/TauCetiClassic,reopened,The AI and its core wipe after death,bug priority: low,"#### Detailed description of the problem
The AI can wipe its core while dead
#### What should have happened
The AI should not be able to wipe its core
#### What actually happened
It managed to wipe its core
#### How to reproduce
Join as the AI, die, press Wipe Core
#### Additional information:
It would also be good if the AI's wipe were not instantaneous at all, because when it is being stormed it simply wipes and then rejoins 30 minutes later to keep taking revenge on the roleplayers. The last time I stormed the AI solo, I started shooting at it and it pressed the button right in front of me and simply wiped, which is not good. It then simply rejoined the round 30 minutes later
",1.0,"The AI and its core wipe after death - #### Detailed description of the problem
The AI can wipe its core while dead
#### What should have happened
The AI should not be able to wipe its core
#### What actually happened
It managed to wipe its core
#### How to reproduce
Join as the AI, die, press Wipe Core
#### Additional information:
It would also be good if the AI's wipe were not instantaneous at all, because when it is being stormed it simply wipes and then rejoins 30 minutes later to keep taking revenge on the roleplayers. The last time I stormed the AI solo, I started shooting at it and it pressed the button right in front of me and simply wiped, which is not good. It then simply rejoined the round 30 minutes later
",0,ии и его вайп кор после смерти подробное описание проблемы ии может вайпнуться будучи мёртвым что должно было произойти ии не должен смочь вайпнуться что произошло на самом деле он смог вайпнуться как повторить зайти за ии умереть нажать wipe core дополнительная информация было бы неплохо ии сделать не моментальную отгрузку вообще ибо когда его штурмуют он просто вайпается и через минут опять заходит и продолжает мстить ролькам в последний раз когда я штурмовал ии в соло я начал в него стрелять он нажав кнопку прямо пред мною просто вайпнулся что ни есть хорошо далее он просто перезашёл в раунд через минут ,0
3565,2679500848.0,IssuesEvent,2015-03-26 16:59:21,tastejs/todomvc,https://api.github.com/repos/tastejs/todomvc,closed,Clear completed integration tests are failing as a result of the new UI,bug failing-tests,"since we now set `clear completed` in CSS https://github.com/tastejs/todomvc-app-css/blob/master/index.css#L325 the integration tests that we have no longer pass.
This is unfortunate because it impacts any app that has upgraded to the new UI. We should investigate how we would like to resolve it, since it would be great to get a passing suite once more.
:+1: :sparkles: :spaghetti: ",1.0,"Clear completed integration tests are failing as a result of the new UI - since we now set `clear completed` in CSS https://github.com/tastejs/todomvc-app-css/blob/master/index.css#L325 the integration tests that we have no longer pass.
This is unfortunate because it impacts any app that has upgraded to the new UI. We should investigate how we would like to resolve it, since it would be great to get a passing suite once more.
:+1: :sparkles: :spaghetti: ",0,clear completed integration tests are failing as a result of the new ui since we now set clear completed in css the integration tests that we have no longer pass this is unfortunate because it impacts any app that has upgraded to the new ui we should investigate how we would like to resolve it since it would be great to get a passing suite once more sparkles spaghetti ,0
148,3620725983.0,IssuesEvent,2016-02-08 21:08:22,npm/npm,https://api.github.com/repos/npm/npm,closed,freshness self-check,feature-request supportability ux,"In https://github.com/npm/npm/issues/10800#issuecomment-181439557, @rosskevin says:
> It's easy to forget to update npm, I wish it had a reminder for me to keep it up to date when it falls behind. Bower does this and I find it quite useful
Homebrew does something like this as well.
Given that it's only going to get more important over time to have an up-to-date npm (at some point, a new Node release will become available that will break pretty much every old release of npm), and given how much of our support traffic comes from users stuck on antique versions of npm, the sooner the CLI team implements something like this, the sooner it will start to become useful.",True,"freshness self-check - In https://github.com/npm/npm/issues/10800#issuecomment-181439557, @rosskevin says:
> It's easy to forget to update npm, I wish it had a reminder for me to keep it up to date when it falls behind. Bower does this and I find it quite useful
Homebrew does something like this as well.
Given that it's only going to get more important over time to have an up-to-date npm (at some point, a new Node release will become available that will break pretty much every old release of npm), and given how much of our support traffic comes from users stuck on antique versions of npm, the sooner the CLI team implements something like this, the sooner it will start to become useful.",1,freshness self check in rosskevin says it s easy to forget to update npm i wish it had a reminder for me to keep it up to date when it falls behind bower does this and i find it quite useful homebrew does something like this as well given that it s only going to get more important over time to have an up to date npm at some point a new node release will become available that will break pretty much every old release of npm and given how much of our support traffic comes from users stuck on antique versions of npm the sooner the cli team implements something like this the sooner it will start to become useful ,1
491763,14170740167.0,IssuesEvent,2020-11-12 14:54:04,googleapis/repo-automation-bots,https://api.github.com/repos/googleapis/repo-automation-bots,closed,MoG: not merging an approved PR,bot: merge on green priority: p2 type: bug,"https://github.com/GoogleCloudPlatform/golang-samples/pull/1821 is approved and all checks have passed. MoG reacted with :eyes:. But, the PR hasn't been merged.",1.0,"MoG: not merging an approved PR - https://github.com/GoogleCloudPlatform/golang-samples/pull/1821 is approved and all checks have passed. MoG reacted with :eyes:. But, the PR hasn't been merged.",0,mog not merging an approved pr is approved and all checks have passed mog reacted with eyes but the pr hasn t been merged ,0
1141,14599931337.0,IssuesEvent,2020-12-21 05:39:08,MicrosoftDocs/sql-docs,https://api.github.com/repos/MicrosoftDocs/sql-docs,closed,Useless.,Pri1 product-feedback sql/prod supportability/tech,"SHOW THE FUCKING SQL STATEMENT FFS.
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6fb26d1f-f4f6-15cf-344c-412eca5c0814
* Version Independent ID: 60751c53-d023-b48c-403e-34621d405618
* Content: [View a Database Snapshot (SQL Server) - SQL Server](https://docs.microsoft.com/en-us/sql/relational-databases/databases/view-a-database-snapshot-sql-server?view=sql-server-ver15#TsqlProcedure)
* Content Source: [docs/relational-databases/databases/view-a-database-snapshot-sql-server.md](https://github.com/MicrosoftDocs/sql-docs/blob/live/docs/relational-databases/databases/view-a-database-snapshot-sql-server.md)
* Product: **sql**
* Technology: **supportability**
* GitHub Login: @stevestein
* Microsoft Alias: **sstein**",True,"Useless. - SHOW THE FUCKING SQL STATEMENT FFS.
[Enter feedback here]
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6fb26d1f-f4f6-15cf-344c-412eca5c0814
* Version Independent ID: 60751c53-d023-b48c-403e-34621d405618
* Content: [View a Database Snapshot (SQL Server) - SQL Server](https://docs.microsoft.com/en-us/sql/relational-databases/databases/view-a-database-snapshot-sql-server?view=sql-server-ver15#TsqlProcedure)
* Content Source: [docs/relational-databases/databases/view-a-database-snapshot-sql-server.md](https://github.com/MicrosoftDocs/sql-docs/blob/live/docs/relational-databases/databases/view-a-database-snapshot-sql-server.md)
* Product: **sql**
* Technology: **supportability**
* GitHub Login: @stevestein
* Microsoft Alias: **sstein**",1,useless show the fucking sql statement ffs document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product sql technology supportability github login stevestein microsoft alias sstein ,1
176151,14565018345.0,IssuesEvent,2020-12-17 06:26:39,algbio/practical-omnitigs,https://api.github.com/repos/algbio/practical-omnitigs,closed,Release 0.2.0,documentation,"After implementing the omnitig algorithm and our experiments completely, we should do another release.
* [x] skim through all READMEs and make sure they are up to date
* [x] figure out what else there is to do",1.0,"Release 0.2.0 - After implementing the omnitig algorithm and our experiments completely, we should do another release.
* [x] skim through all READMEs and make sure they are up to date
* [x] figure out what else there is to do",0,release after implementing the omnitig algorithm and our experiments completely we should do another release skim through all readmes and make sure they are up to date figure out what else there is to do,0
1350,19291987542.0,IssuesEvent,2021-12-12 00:05:09,verilator/verilator,https://api.github.com/repos/verilator/verilator,closed,Bashisms in the configure file: ./configure: CFLAGS+= : not found,resolution: fixed area: portability,"```configure``` prints this:
```
./configure: CFLAGS+= : not found
./configure: CPPFLAGS+= : not found
./configure: CXXFLAGS+= : not found
./configure: LDFLAGS+= : not found
```
configure is commonly expected to be Bourne shell compatible, but on Linux /bin/sh is a symlink to Bash.
Version: 4.212
OS: FreeBSD 13",True,"Bashisms in the configure file: ./configure: CFLAGS+= : not found - ```configure``` prints this:
```
./configure: CFLAGS+= : not found
./configure: CPPFLAGS+= : not found
./configure: CXXFLAGS+= : not found
./configure: LDFLAGS+= : not found
```
configure is commonly expected to be Bourne shell compatible, but on Linux /bin/sh is a symlink to Bash.
Version: 4.212
OS: FreeBSD 13",1,bashisms in the configure file configure cflags not found configure prints this configure cflags not found configure cppflags not found configure cxxflags not found configure ldflags not found configure is commonly expected to be bourne shell compatible but on linux bin sh is a symlink to bash version os freebsd ,1
793433,27996420145.0,IssuesEvent,2023-03-27 08:47:40,slsdetectorgroup/slsDetectorPackage,https://api.github.com/repos/slsdetectorgroup/slsDetectorPackage,closed,Eiger: 8 bit mode compression,action - Enhancement priority - High status - Awaiting info,"
##### *Detector type:
Eiger
##### *Software Package Version:
##### Priority:
High
##### *State the feature:
handle 8 bit compressed data from detector
##### Is your feature request related to a problem. Please describe:
##### Describe the solution you'd like:
##### Describe alternatives you've considered:
##### Additional context:
tbd
@mbrueckner-psi
@erikfrojdh ",1.0,"Eiger: 8 bit mode compression -
##### *Detector type:
Eiger
##### *Software Package Version:
##### Priority:
High
##### *State the feature:
handle 8 bit compressed data from detector
##### Is your feature request related to a problem. Please describe:
##### Describe the solution you'd like:
##### Describe alternatives you've considered:
##### Additional context:
tbd
@mbrueckner-psi
@erikfrojdh ",0,eiger bit mode compression detector type eiger software package version priority high state the feature handle bit compressed data from detector is your feature request related to a problem please describe describe the solution you d like describe alternatives you ve considered additional context tbd mbrueckner psi erikfrojdh ,0
100712,30759420535.0,IssuesEvent,2023-07-29 13:48:44,chaotic-aur/packages,https://api.github.com/repos/chaotic-aur/packages,closed,[Outdated] linux-nitrous,waiting:upstream-fix request:rebuild-pkg bug:PKGBUILD,"### If available, link to the latest build
[linux-nitrous.log](https://builds.garudalinux.org/repos/chaotic-aur/logs/linux-nitrous.log)
### Package name
`linux-nitrous`
### Latest build
`6.2.13`
### Latest version available
`6.3.1`
### Have you tested if the package builds in a clean chroot?
- [ ] Yes
### More information
Fails to build following the release of 6.3 seemingly due to a new build dependency on python3 which is not installed.",2.0,"[Outdated] linux-nitrous - ### If available, link to the latest build
[linux-nitrous.log](https://builds.garudalinux.org/repos/chaotic-aur/logs/linux-nitrous.log)
### Package name
`linux-nitrous`
### Latest build
`6.2.13`
### Latest version available
`6.3.1`
### Have you tested if the package builds in a clean chroot?
- [ ] Yes
### More information
Fails to build following the release of 6.3 seemingly due to a new build dependency on python3 which is not installed.",0, linux nitrous if available link to the latest build package name linux nitrous latest build latest version available have you tested if the package builds in a clean chroot yes more information fails to build following the release of seemingly due to a new build dependency on which is not installed ,0
346981,10422545993.0,IssuesEvent,2019-09-16 09:16:49,ushahidi/opendesign,https://api.github.com/repos/ushahidi/opendesign,closed,Methodology #1,Content Highest Priority In progress Methodology,"The first version of the Methodology to test at the 1st event
WIP Document here: https://docs.google.com/document/d/136xY-e6Vrr6Fes0t19Jh64qUbzes7TNsQrK7fncuimA/edit?usp=sharing
## Update:
The methodology has been reviewed, amended and split into two:
1 - The public-facing replicable document that serves as a 'guide': https://docs.google.com/document/d/136xY-e6Vrr6Fes0t19Jh64qUbzes7TNsQrK7fncuimA/edit?usp=sharing
2 - The internal document intended to help guide our research in gathering insight: https://docs.google.com/document/d/1lkaq7VA3INTwh4M941KYt-IQ-whhWm7W9k5cm436UDw/edit?usp=sharing
",1.0,"Methodology #1 - The first version of the Methodology to test at the 1st event
## Update:
The methodology has been reviewed, amended and split into two:
1 - The public-facing replicable document that serves as a 'guide': https://docs.google.com/document/d/136xY-e6Vrr6Fes0t19Jh64qUbzes7TNsQrK7fncuimA/edit?usp=sharing
2 - The internal document intended to help guide or research gathering insight: https://docs.google.com/document/d/1lkaq7VA3INTwh4M941KYt-IQ-whhWm7W9k5cm436UDw/edit?usp=sharing",0,methdology the first version of the methodology to test at the event wip document here update methodlogy has been reviewed amended and split into two the public facing replicable document that serves as a guide the internal document intended to help guide or research gathering insight ,0
99989,21097606434.0,IssuesEvent,2022-04-04 11:51:06,Regalis11/Barotrauma,https://api.github.com/repos/Regalis11/Barotrauma,closed,Money command's character argument doesn't work,Bug Code,"@Regalis11 commented on [Thu Mar 24 2022](https://github.com/Regalis11/Barotrauma-development/issues/3215)
- [x] I have searched the issue tracker to check if the issue has already been reported.
**Description**
Money command's character argument doesn't work, the money always goes to the bank.
**Steps To Reproduce**
Try to give money to a specific character in mp with the `money` command.
**Version**
v0.17.1.0 and later
",1.0,"Money command's character argument doesn't work - @Regalis11 commented on [Thu Mar 24 2022](https://github.com/Regalis11/Barotrauma-development/issues/3215)
- [x] I have searched the issue tracker to check if the issue has already been reported.
**Description**
Money command's character argument doesn't work, the money always goes to the bank.
**Steps To Reproduce**
Try to give money to a specific character in mp with the `money` command.
**Version**
v0.17.1.0 and later
",0,money command s character argument doesn t work commented on i have searched the issue tracker to check if the issue has already been reported description money command s character argument doesn t work the money always goes to the bank steps to reproduce try to give money to a specific character in mp with the money command version and later ,0
436017,12544135055.0,IssuesEvent,2020-06-05 16:43:04,oppia/oppia-android,https://api.github.com/repos/oppia/oppia-android,closed, HomeFragment - Tablet (Landscape) (Lowfi),Priority: Essential Status: Pending verification Type: Task Where: Starting flows Workstream: Lowfi UI,"Mocks: https://xd.adobe.com/view/d405de00-a871-4f0f-73a0-f8acef30349b-a234/screen/5434c52d-b32b-4666-8b28-cf03b3cbd4cd/L-Home-Screen
Implement low-fi UI for **HomeFragment** tablet landscape mode
**Target PR date**: 7 June 2020
**Target completion date**: 10 June 2020",1.0," HomeFragment - Tablet (Landscape) (Lowfi) - Mocks: https://xd.adobe.com/view/d405de00-a871-4f0f-73a0-f8acef30349b-a234/screen/5434c52d-b32b-4666-8b28-cf03b3cbd4cd/L-Home-Screen
Implement low-fi UI for **HomeFragment** tablet landscape mode
**Target PR date**: 7 June 2020
**Target completion date**: 10 June 2020",0, homefragment tablet landscape lowfi mocks implement low fi ui for homefragment tablet landscape mode target pr date june target completion date june ,0
829,10616457054.0,IssuesEvent,2019-10-12 11:53:29,OpenXRay/xray-16,https://api.github.com/repos/OpenXRay/xray-16,closed,Is it possible to build engine on macOS?,Portability Question,"The only dependency I've failed to resolve is liblockfile-dev.
Is it possible theoretically?",True,"Is it possible to build engine on macOS? - The only dependency I've failed to resolve is liblockfile-dev.
Is it possible theoretically?",1,is it possible to build engine on macos the only dependency i ve failed to resolve is liblockfile dev is it possible theoretically ,1
529144,15380866903.0,IssuesEvent,2021-03-02 21:45:00,mantidproject/mantid,https://api.github.com/repos/mantidproject/mantid,closed,A few issues with VSI,Low Priority MantidPlot Stale Vates,"This issue was originally [TRAC 10783](http://trac.mantidproject.org/mantid/ticket/10783)
1. Dialogs hide VSI behind MantidPlot.
2. ThreeSlice: slices can rotate (middle button). Reset tool button doesn't cancel the rotation.
3. View settings General page doesn't make it clear which component's colour is being set here. The preset colours in the drop-down boxes are those that are used for edges, text, etc. I don't think that they are useful.
4. View settings General page: the Use Parallel Projection check box isn't synced with the tool button on the main view. The same applies to the Orientation Axes on the Annotation page.
5. Axis Label Color doesn't stay set to the custom value: after switching view (e.g. from Standard to MultiSlice and back) it changes back to default.
6. I am not sure if Zoom to Box works correctly. After a few mouse zooms it stops working at all.
---
Keywords: VSI
",1.0,"A few issues with VSI - This issue was originally [TRAC 10783](http://trac.mantidproject.org/mantid/ticket/10783)
1. Dialogs hide VSI behind MantidPlot.
2. ThreeSlice: slices can rotate (middle button). Reset tool button doesn't cancel the rotation.
3. View settings General page doesn't make it clear which component's colour is being set here. The preset colours in the drop-down boxes are those that are used for edges, text, etc. I don't think that they are useful.
4. View settings General page: the Use Parallel Projection check box isn't synced with the tool button on the main view. The same applies to the Orientation Axes on the Annotation page.
5. Axis Label Color doesn't stay set to the custom value: after switching view (e.g. from Standard to MultiSlice and back) it changes back to default.
6. I am not sure if Zoom to Box works correctly. After a few mouse zooms it stops working at all.
---
Keywords: VSI
",0,a few issues with vsi this issue was originally dialogs hide vsi behind mantidplot threeslice slices can rotate middle button reset tool button doesn t cancel the rotation view settings general page doesn t make it clear colour of which component is being set here the preset colours in the drop down boxes are those that are used for edges text etc i don t think that they are useful view settings general page use parallel projection check box isn t synced with the tool button on the main view the same is for the orientation axes on the annotation page axis label color doesn t stay set to the custom value after switching view e g from standard to multislice and back it changes back to default i am not sure if zoom to box works correctly after a few mouse zooms it stops working at all keywords vsi ,0
1783,26206435721.0,IssuesEvent,2023-01-03 23:17:11,golang/vulndb,https://api.github.com/repos/golang/vulndb,closed,x/vulndb: potential Go vuln in github.com/jessfraz/pastebinit: CVE-2018-25059,excluded: NOT_IMPORTABLE,"CVE-2018-25059 references [github.com/jessfraz/pastebinit](https://github.com/jessfraz/pastebinit), which may be a Go module.
Description:
A vulnerability was found in pastebinit up to 0.2.2 and classified as critical. Affected by this issue is the function pasteHandler of the file server.go. The manipulation of the argument r.URL.Path leads to path traversal. Upgrading to version 0.2.3 is able to address this issue. The name of the patch is 1af2facb6d95976c532b7f8f82747d454a092272. It is recommended to upgrade the affected component. The identifier of this vulnerability is VDB-217040.
References:
- NIST: https://nvd.nist.gov/vuln/detail/CVE-2018-25059
- JSON: https://github.com/CVEProject/cvelist/tree/b4495c7a770bb3c933d6b51ee1aa9b8af831654f/2018/25xxx/CVE-2018-25059.json
- web: https://vuldb.com/?id.217040
- web: https://vuldb.com/?ctiid.217040
- fix: https://github.com/jessfraz/pastebinit/pull/3
- fix: https://github.com/jessfraz/pastebinit/commit/1af2facb6d95976c532b7f8f82747d454a092272
- web: https://github.com/jessfraz/pastebinit/releases/tag/v0.2.3
- Imported by: https://pkg.go.dev/github.com/jessfraz/pastebinit?tab=importedby
Cross references:
No existing reports found with this module or alias.
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/jessfraz/pastebinit
packages:
- package: pastebinit
description: |
A vulnerability was found in pastebinit up to 0.2.2 and classified as critical. Affected by this issue is the function pasteHandler of the file server.go. The manipulation of the argument r.URL.Path leads to path traversal. Upgrading to version 0.2.3 is able to address this issue. The name of the patch is 1af2facb6d95976c532b7f8f82747d454a092272. It is recommended to upgrade the affected component. The identifier of this vulnerability is VDB-217040.
Eine Schwachstelle wurde in pastebinit bis 0.2.2 gefunden. Sie wurde als kritisch eingestuft. Davon betroffen ist die Funktion pasteHandler der Datei server.go. Durch Beeinflussen des Arguments r.URL.Path mit unbekannten Daten kann eine path traversal-Schwachstelle ausgenutzt werden. Ein Aktualisieren auf die Version 0.2.3 vermag dieses Problem zu lösen. Der Patch wird als 1af2facb6d95976c532b7f8f82747d454a092272 bezeichnet. Als bestmögliche Massnahme wird das Einspielen eines Upgrades empfohlen.
cves:
- CVE-2018-25059
references:
- web: https://vuldb.com/?id.217040
- web: https://vuldb.com/?ctiid.217040
- fix: https://github.com/jessfraz/pastebinit/pull/3
- fix: https://github.com/jessfraz/pastebinit/commit/1af2facb6d95976c532b7f8f82747d454a092272
- web: https://github.com/jessfraz/pastebinit/releases/tag/v0.2.3
```",True,"x/vulndb: potential Go vuln in github.com/jessfraz/pastebinit: CVE-2018-25059 - CVE-2018-25059 references [github.com/jessfraz/pastebinit](https://github.com/jessfraz/pastebinit), which may be a Go module.
Description:
A vulnerability was found in pastebinit up to 0.2.2 and classified as critical. Affected by this issue is the function pasteHandler of the file server.go. The manipulation of the argument r.URL.Path leads to path traversal. Upgrading to version 0.2.3 is able to address this issue. The name of the patch is 1af2facb6d95976c532b7f8f82747d454a092272. It is recommended to upgrade the affected component. The identifier of this vulnerability is VDB-217040.
References:
- NIST: https://nvd.nist.gov/vuln/detail/CVE-2018-25059
- JSON: https://github.com/CVEProject/cvelist/tree/b4495c7a770bb3c933d6b51ee1aa9b8af831654f/2018/25xxx/CVE-2018-25059.json
- web: https://vuldb.com/?id.217040
- web: https://vuldb.com/?ctiid.217040
- fix: https://github.com/jessfraz/pastebinit/pull/3
- fix: https://github.com/jessfraz/pastebinit/commit/1af2facb6d95976c532b7f8f82747d454a092272
- web: https://github.com/jessfraz/pastebinit/releases/tag/v0.2.3
- Imported by: https://pkg.go.dev/github.com/jessfraz/pastebinit?tab=importedby
Cross references:
No existing reports found with this module or alias.
See [doc/triage.md](https://github.com/golang/vulndb/blob/master/doc/triage.md) for instructions on how to triage this report.
```
modules:
- module: github.com/jessfraz/pastebinit
packages:
- package: pastebinit
description: |
A vulnerability was found in pastebinit up to 0.2.2 and classified as critical. Affected by this issue is the function pasteHandler of the file server.go. The manipulation of the argument r.URL.Path leads to path traversal. Upgrading to version 0.2.3 is able to address this issue. The name of the patch is 1af2facb6d95976c532b7f8f82747d454a092272. It is recommended to upgrade the affected component. The identifier of this vulnerability is VDB-217040.
A vulnerability was found in pastebinit up to 0.2.2 and rated as critical. The pasteHandler function in the file server.go is affected. Manipulating the r.URL.Path argument with unknown data can exploit a path traversal vulnerability. Updating to version 0.2.3 resolves this problem. The patch is identified as 1af2facb6d95976c532b7f8f82747d454a092272. Installing the upgrade is recommended as the best possible mitigation.
cves:
- CVE-2018-25059
references:
- web: https://vuldb.com/?id.217040
- web: https://vuldb.com/?ctiid.217040
- fix: https://github.com/jessfraz/pastebinit/pull/3
- fix: https://github.com/jessfraz/pastebinit/commit/1af2facb6d95976c532b7f8f82747d454a092272
- web: https://github.com/jessfraz/pastebinit/releases/tag/v0.2.3
```",1,x vulndb potential go vuln in github com jessfraz pastebinit cve cve references which may be a go module description a vulnerability was found in pastebinit up to and classified as critical affected by this issue is the function pastehandler of the file server go the manipulation of the argument r url path leads to path traversal upgrading to version is able to address this issue the name of the patch is it is recommended to upgrade the affected component the identifier of this vulnerability is vdb references nist json web web fix fix web imported by cross references no existing reports found with this module or alias see for instructions on how to triage this report modules module github com jessfraz pastebinit packages package pastebinit description a vulnerability was found in pastebinit up to and classified as critical affected by this issue is the function pastehandler of the file server go the manipulation of the argument r url path leads to path traversal upgrading to version is able to address this issue the name of the patch is it is recommended to upgrade the affected component the identifier of this vulnerability is vdb eine schwachstelle wurde in pastebinit bis gefunden sie wurde als kritisch eingestuft davon betroffen ist die funktion pastehandler der datei server go durch beeinflussen des arguments r url path mit unbekannten daten kann eine path traversal schwachstelle ausgenutzt werden ein aktualisieren auf die version vermag dieses problem zu lösen der patch wird als bezeichnet als bestmögliche massnahme wird das einspielen eines upgrades empfohlen cves cve references web web fix fix web ,1
259,5038086305.0,IssuesEvent,2016-12-18 02:06:20,svaarala/duktape,https://api.github.com/repos/svaarala/duktape,closed,Option to disable transcendental math functions,portability,"http://stackoverflow.com/questions/30471440/disable-transcendentals-in-duktape
Could add a feature option (DUK_OPT_xxx) to do this. Or could wait for `duk_config.h` rework to be merged and define a config option (DUK_USE_xxx) directly.
",True,"Option to disable transcendental math functions - http://stackoverflow.com/questions/30471440/disable-transcendentals-in-duktape
Could add a feature option (DUK_OPT_xxx) to do this. Or could wait for `duk_config.h` rework to be merged and define a config option (DUK_USE_xxx) directly.
",1,option to disable transcendental math functions could add a feature option duk opt xxx to do this or could wait for duk config h rework to be merged and define a config option duk use xxx directly ,1
517,7300951444.0,IssuesEvent,2018-02-27 02:18:30,librsync/librsync,https://api.github.com/repos/librsync/librsync,closed,CMake issue: Drop perl as a build dependency.,portability,"Hi,
I am trying to build from the latest master using CMake 3.6.0 and Visual Studio 2015.
Whenever I try and configure CMake I get these two errors:
PERL_EXECUTABLE-NOTFOUND
POPT_INCLUDE_DIRS-NOTFOUND
How can I make it work?
",True,"CMake issue: Drop perl as a build dependency. - Hi,
I am trying to build from the latest master using CMake 3.6.0 and Visual Studio 2015.
Whenever I try and configure CMake I get these two errors:
PERL_EXECUTABLE-NOTFOUND
POPT_INCLUDE_DIRS-NOTFOUND
How can I make it work?
",1,cmake issue drop perl as a build dependency hi i am trying to build from the latest master using cmake and visual studio whenever i try and configure cmake i get these two errors perl executable notfound popt include dirs notfound how can i make it work ,1
61139,6726507966.0,IssuesEvent,2017-10-17 10:06:33,LiskHQ/lisk-js,https://api.github.com/repos/LiskHQ/lisk-js,closed,Make tests atomic,parent refactoring test,"E.g. [this suite](https://github.com/LiskHQ/lisk-js/blob/development/test/transactions/delegate.js#L39) depends on [this test](https://github.com/LiskHQ/lisk-js/blob/development/test/transactions/delegate.js#L35). It would be better to have a `beforeEach` hook or similar so that individual tests can be run.
- [x] Separate test files for separate api modules - #267 - MERGED
- [x] Dapp transaction - #268 - MERGED
- [x] Delegate transaction - #269 - MERGED
- [x] Multisignature transaction - #270 - MERGED
- [x] Signature transaction - #271 - MERGED
- [x] Transaction transaction - #272 - MERGED
- [x] Transfer transaction - #273 - MERGED
- [x] Vote transaction - #274 - MERGED
- [x] Crypto - #275
- [x] Time - #276 - MERGED
- [x] LiskAPI - #277 - MERGED
- [x] Private API - #278 - MERGED
- [x] API utils - #279 - MERGED
- [x] Transaction utils - #280 - MERGED
- [x] Mnemonic - #281 - MERGED
- [x] Constants (shape only) - #352 - MERGED
- [x] Make sure all transaction tests have an id check - #353 - MERGED
- [x] Remove crypto stubbing - #354 - MERGED",1.0,"Make tests atomic - E.g. [this suite](https://github.com/LiskHQ/lisk-js/blob/development/test/transactions/delegate.js#L39) depends on [this test](https://github.com/LiskHQ/lisk-js/blob/development/test/transactions/delegate.js#L35). It would be better to have a `beforeEach` hook or similar so that individual tests can be run.
- [x] Separate test files for separate api modules - #267 - MERGED
- [x] Dapp transaction - #268 - MERGED
- [x] Delegate transaction - #269 - MERGED
- [x] Multisignature transaction - #270 - MERGED
- [x] Signature transaction - #271 - MERGED
- [x] Transaction transaction - #272 - MERGED
- [x] Transfer transaction - #273 - MERGED
- [x] Vote transaction - #274 - MERGED
- [x] Crypto - #275
- [x] Time - #276 - MERGED
- [x] LiskAPI - #277 - MERGED
- [x] Private API - #278 - MERGED
- [x] API utils - #279 - MERGED
- [x] Transaction utils - #280 - MERGED
- [x] Mnemonic - #281 - MERGED
- [x] Constants (shape only) - #352 - MERGED
- [x] Make sure all transaction tests have an id check - #353 - MERGED
- [x] Remove crypto stubbing - #354 - MERGED",0,make tests atomic e g depends on it would be better to have a beforeeach hook or similar so that individual tests can be run separate test files for separate api modules merged dapp transaction merged delegate transaction merged multisignature transaction merged signature transaction merged transaction transaction merged transfer transaction merged vote transaction merged crypto time merged liskapi merged private api merged api utils merged transaction utils merged mnemonic merged constants shape only merged make sure all transaction tests have an id check merged remove crypto stubbing merged,0
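The issue above asks for per-test setup (mocha's `beforeEach`) so that no test depends on state left behind by another. The lisk-js suite itself is JavaScript; as a language-neutral illustration only, the same idea with a pytest fixture looks like the sketch below, where `Transaction` and the test names are invented.
```python
import pytest

class Transaction:
    """Stand-in for whatever object the real suite builds; purely illustrative."""
    def __init__(self, kind: str):
        self.kind = kind
        self.signatures = []

    def sign(self, key: str) -> None:
        self.signatures.append(key)

@pytest.fixture
def delegate_tx() -> Transaction:
    # Equivalent of a beforeEach hook: every test receives a fresh object,
    # so each test can be run on its own.
    return Transaction("delegate")

def test_has_kind(delegate_tx):
    assert delegate_tx.kind == "delegate"

def test_sign_adds_signature(delegate_tx):
    delegate_tx.sign("secret")
    assert delegate_tx.signatures == ["secret"]
```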
252,4893612469.0,IssuesEvent,2016-11-19 00:00:44,funcoeszz/funcoeszz,https://api.github.com/repos/funcoeszz/funcoeszz,closed,"Docker: zzloteria failing, problems with /tmp",portabilidade,"I tried mapping the host's `/tmp` and also running without mapping it. The result was the same.
The first error is timemania, but it seems to be unrelated to the others, which are the tests that query the draw history and are having some problem accessing files in `/tmp`.
```console
$ docker run --rm -w /app/testador --entrypoint ./run -v $PWD:/app/ -v /tmp:/tmp funcoeszz zzloteria.sh
.......
--------------------------------------------------------------------------------
[FAILED #7, line 51] zzloteria timemania | tr '[0-9.]' 'N' | head -3
@@ -1,3 +1,2 @@
timemania:
-Concurso NNN (NN/NN/NNNN)
- NN NN NN NN NN NN NN
+
--------------------------------------------------------------------------------
..
--------------------------------------------------------------------------------
[FAILED #9, line 77] zzloteria loteca 500 | tr '[0-9]' 'N' | head -17 | sed 's/Col. Meio/ Col. N/'
@@ -1,17 +1,8 @@
+mv: cannot stat ‘/tmp/D_LOTECA.HTM’: No such file or directory
+
+Can't Access `file://localhost/tmp/zz.loteria.loteca.htm'
+Alert!: Unable to access document.
+
+lynx: Can't access startfile
loteca:
-Concurso NNN (NN/NN/NNNN)
- Jogo Resultado
- N Col. N
- N Col. N
- N Col. N
- N Col. N
- N Col. N
- N Col. N
- N Col. N
- N Col. N
- N Col. N
- NN Col. N
- NN Col. N
- NN Col. N
- NN Col. N
- NN Col. N
+
--------------------------------------------------------------------------------
.
--------------------------------------------------------------------------------
[FAILED #10, line 97] zzloteria timemania 600 | tr '[0-9]' 'N' | head -3
@@ -1,3 +1,8 @@
+mv: cannot stat ‘/tmp/D_TIMASC.HTM’: No such file or directory
+
+Can't Access `file://localhost/tmp/zz.loteria.timemania.htm'
+Alert!: Unable to access document.
+
+lynx: Can't access startfile
timemania:
-Concurso NNN (NN/NN/NNNN)
- NN NN NN NN NN NN NN
+
--------------------------------------------------------------------------------
.
--------------------------------------------------------------------------------
[FAILED #11, line 103] zzloteria federal 2500
@@ -1,9 +1,8 @@
federal:
-Concurso 02500 (11/01/1989)
+mv: cannot stat ‘/tmp/D_LOTFED.HTM’: No such file or directory
- 1º Premio 66069 R$ 200.000,00
- 2º Premio 77589 R$ 8.000,00
- 3º Premio 60325 R$ 5.000,00
- 4º Premio 03547 R$ 4.000,00
- 5º Premio 48642 R$ 2.000,00
+Can't Access `file://localhost/tmp/zz.loteria.federal.htm'
+Alert!: Unable to access document.
+
+lynx: Can't access startfile
--------------------------------------------------------------------------------
.
--------------------------------------------------------------------------------
[FAILED #12, line 115] zzloteria duplasena 1000
@@ -1,19 +1,8 @@
duplasena:
-Concurso 1000 (06/09/2011)
+mv: cannot stat ‘/tmp/D_DPLSEN.HTM’: No such file or directory
- 1º sorteio
- 07 09 18 42 45 46
+Can't Access `file://localhost/tmp/zz.loteria.duplasena.htm'
+Alert!: Unable to access document.
- 2º sorteio
- 03 13 31 32 36 39
-
- 1º Sorteio
- Sena Nao houve acertador
- Quina 55 R$ 2.224,60
- Quadra 2368 R$ 2368
-
- 2º Sorteio
- Sena 1 R$ 163.137,83
- Quina 77 R$ 1.589,00
- Quadra 2560 R$ 45,51
+lynx: Can't access startfile
--------------------------------------------------------------------------------
.
--------------------------------------------------------------------------------
[FAILED #13, line 137] zzloteria quina 3000
@@ -1,8 +1,8 @@
quina:
-Concurso 3000 (20/09/2012)
- 02 21 31 37 57
+mv: cannot stat ‘/tmp/D_QUINA.HTM’: No such file or directory
- Quina Nao houve acertador
- Quadra 54 R$ 6.681,90
- Terno 4528 R$ 113,83
+Can't Access `file://localhost/tmp/zz.loteria.quina.htm'
+Alert!: Unable to access document.
+
+lynx: Can't access startfile
--------------------------------------------------------------------------------
.
--------------------------------------------------------------------------------
[FAILED #14, line 148] zzloteria megasena 1111
@@ -1,8 +1,8 @@
megasena:
-Concurso 1111 (23/09/2009)
- 04 09 25 32 33 43
+mv: cannot stat ‘/tmp/d_megasc.htm’: No such file or directory
- Sena 1 R$ 2.100.928,15
- Quina 52 R$ 21.932,77
- Quadra 5266 R$ 309,39
+Can't Access `file://localhost/tmp/zz.loteria.megasena.htm'
+Alert!: Unable to access document.
+
+lynx: Can't access startfile
--------------------------------------------------------------------------------
.
--------------------------------------------------------------------------------
[FAILED #15, line 159] zzloteria lotomania 750
@@ -1,14 +1,8 @@
lotomania:
-Concurso 750 (18/08/2007)
- 04 07 19 22 24
- 30 38 41 45 53
- 57 66 68 72 75
- 77 80 83 92 94
+mv: cannot stat ‘/tmp/D_LOTMAN.HTM’: No such file or directory
- 20 pts. 5 R$ 1.364.031,89
- 19 pts. 66 R$ 7.504,25
- 18 pts. 595 R$ 832,40
- 17 pts. 4489 R$ 54,96
- 16 pts. 21493 R$ 11,48
- 0 pts. 3 R$ 82.546,71
+Can't Access `file://localhost/tmp/zz.loteria.lotomania.htm'
+Alert!: Unable to access document.
+
+lynx: Can't access startfile
--------------------------------------------------------------------------------
.
--------------------------------------------------------------------------------
[FAILED #16, line 176] zzloteria lotofacil 1099
@@ -1,12 +1,8 @@
lotofacil:
-Concurso 1099 (25/08/2014)
- 02 03 06 07 08
- 11 12 13 14 16
- 21 22 23 24 25
+mv: cannot stat ‘/tmp/D_LOTFAC.HTM’: No such file or directory
- 15 pts. Nao houve acertador!
- 14 pts. 445 R$ 1.278,83
- 13 pts. 13926 R$ 15,00
- 12 pts. 204099 R$ 6,00
- 11 pts. 1108572 R$ 3,00
+Can't Access `file://localhost/tmp/zz.loteria.lotofacil.htm'
+Alert!: Unable to access document.
+
+lynx: Can't access startfile
--------------------------------------------------------------------------------
FAIL: 9 of 16 tests failed
$
```
Running on the host (without Docker), only duplasena has a problem:
```console
$ ./run zzloteria.sh
...
----------------------------------------------------------------------------------------------------------------------------------------------
[FAILED #3, line 13] zzloteria duplasena | tr '[0-9]' 'N' | sed '/^ *$/d;s/^ *//' | head -6
@@ -1,6 +1 @@
duplasena:
-Concurso NNNN (NN/NN/NNNN)
-Nº sorteio
-NN NN NN NN NN NN
-Nº sorteio
-NN NN NN NN NN NN
----------------------------------------------------------------------------------------------------------------------------------------------
.............
FAIL: 1 of 16 tests failed
$
```",True,"Docker: zzloteria falhando, problemas no /tmp - Tentei mapeando o `/tmp` do host e sem mapear também. O resultado foi o mesmo.
The first error is timemania, but it seems to be unrelated to the others, which are the tests that query the draw history and are having some problem accessing files in `/tmp`.
```console
$ docker run --rm -w /app/testador --entrypoint ./run -v $PWD:/app/ -v /tmp:/tmp funcoeszz zzloteria.sh
.......
--------------------------------------------------------------------------------
[FAILED #7, line 51] zzloteria timemania | tr '[0-9.]' 'N' | head -3
@@ -1,3 +1,2 @@
timemania:
-Concurso NNN (NN/NN/NNNN)
- NN NN NN NN NN NN NN
+
--------------------------------------------------------------------------------
..
--------------------------------------------------------------------------------
[FAILED #9, line 77] zzloteria loteca 500 | tr '[0-9]' 'N' | head -17 | sed 's/Col. Meio/ Col. N/'
@@ -1,17 +1,8 @@
+mv: cannot stat ‘/tmp/D_LOTECA.HTM’: No such file or directory
+
+Can't Access `file://localhost/tmp/zz.loteria.loteca.htm'
+Alert!: Unable to access document.
+
+lynx: Can't access startfile
loteca:
-Concurso NNN (NN/NN/NNNN)
- Jogo Resultado
- N Col. N
- N Col. N
- N Col. N
- N Col. N
- N Col. N
- N Col. N
- N Col. N
- N Col. N
- N Col. N
- NN Col. N
- NN Col. N
- NN Col. N
- NN Col. N
- NN Col. N
+
--------------------------------------------------------------------------------
.
--------------------------------------------------------------------------------
[FAILED #10, line 97] zzloteria timemania 600 | tr '[0-9]' 'N' | head -3
@@ -1,3 +1,8 @@
+mv: cannot stat ‘/tmp/D_TIMASC.HTM’: No such file or directory
+
+Can't Access `file://localhost/tmp/zz.loteria.timemania.htm'
+Alert!: Unable to access document.
+
+lynx: Can't access startfile
timemania:
-Concurso NNN (NN/NN/NNNN)
- NN NN NN NN NN NN NN
+
--------------------------------------------------------------------------------
.
--------------------------------------------------------------------------------
[FAILED #11, line 103] zzloteria federal 2500
@@ -1,9 +1,8 @@
federal:
-Concurso 02500 (11/01/1989)
+mv: cannot stat ‘/tmp/D_LOTFED.HTM’: No such file or directory
- 1º Premio 66069 R$ 200.000,00
- 2º Premio 77589 R$ 8.000,00
- 3º Premio 60325 R$ 5.000,00
- 4º Premio 03547 R$ 4.000,00
- 5º Premio 48642 R$ 2.000,00
+Can't Access `file://localhost/tmp/zz.loteria.federal.htm'
+Alert!: Unable to access document.
+
+lynx: Can't access startfile
--------------------------------------------------------------------------------
.
--------------------------------------------------------------------------------
[FAILED #12, line 115] zzloteria duplasena 1000
@@ -1,19 +1,8 @@
duplasena:
-Concurso 1000 (06/09/2011)
+mv: cannot stat ‘/tmp/D_DPLSEN.HTM’: No such file or directory
- 1º sorteio
- 07 09 18 42 45 46
+Can't Access `file://localhost/tmp/zz.loteria.duplasena.htm'
+Alert!: Unable to access document.
- 2º sorteio
- 03 13 31 32 36 39
-
- 1º Sorteio
- Sena Nao houve acertador
- Quina 55 R$ 2.224,60
- Quadra 2368 R$ 2368
-
- 2º Sorteio
- Sena 1 R$ 163.137,83
- Quina 77 R$ 1.589,00
- Quadra 2560 R$ 45,51
+lynx: Can't access startfile
--------------------------------------------------------------------------------
.
--------------------------------------------------------------------------------
[FAILED #13, line 137] zzloteria quina 3000
@@ -1,8 +1,8 @@
quina:
-Concurso 3000 (20/09/2012)
- 02 21 31 37 57
+mv: cannot stat ‘/tmp/D_QUINA.HTM’: No such file or directory
- Quina Nao houve acertador
- Quadra 54 R$ 6.681,90
- Terno 4528 R$ 113,83
+Can't Access `file://localhost/tmp/zz.loteria.quina.htm'
+Alert!: Unable to access document.
+
+lynx: Can't access startfile
--------------------------------------------------------------------------------
.
--------------------------------------------------------------------------------
[FAILED #14, line 148] zzloteria megasena 1111
@@ -1,8 +1,8 @@
megasena:
-Concurso 1111 (23/09/2009)
- 04 09 25 32 33 43
+mv: cannot stat ‘/tmp/d_megasc.htm’: No such file or directory
- Sena 1 R$ 2.100.928,15
- Quina 52 R$ 21.932,77
- Quadra 5266 R$ 309,39
+Can't Access `file://localhost/tmp/zz.loteria.megasena.htm'
+Alert!: Unable to access document.
+
+lynx: Can't access startfile
--------------------------------------------------------------------------------
.
--------------------------------------------------------------------------------
[FAILED #15, line 159] zzloteria lotomania 750
@@ -1,14 +1,8 @@
lotomania:
-Concurso 750 (18/08/2007)
- 04 07 19 22 24
- 30 38 41 45 53
- 57 66 68 72 75
- 77 80 83 92 94
+mv: cannot stat ‘/tmp/D_LOTMAN.HTM’: No such file or directory
- 20 pts. 5 R$ 1.364.031,89
- 19 pts. 66 R$ 7.504,25
- 18 pts. 595 R$ 832,40
- 17 pts. 4489 R$ 54,96
- 16 pts. 21493 R$ 11,48
- 0 pts. 3 R$ 82.546,71
+Can't Access `file://localhost/tmp/zz.loteria.lotomania.htm'
+Alert!: Unable to access document.
+
+lynx: Can't access startfile
--------------------------------------------------------------------------------
.
--------------------------------------------------------------------------------
[FAILED #16, line 176] zzloteria lotofacil 1099
@@ -1,12 +1,8 @@
lotofacil:
-Concurso 1099 (25/08/2014)
- 02 03 06 07 08
- 11 12 13 14 16
- 21 22 23 24 25
+mv: cannot stat ‘/tmp/D_LOTFAC.HTM’: No such file or directory
- 15 pts. Nao houve acertador!
- 14 pts. 445 R$ 1.278,83
- 13 pts. 13926 R$ 15,00
- 12 pts. 204099 R$ 6,00
- 11 pts. 1108572 R$ 3,00
+Can't Access `file://localhost/tmp/zz.loteria.lotofacil.htm'
+Alert!: Unable to access document.
+
+lynx: Can't access startfile
--------------------------------------------------------------------------------
FAIL: 9 of 16 tests failed
$
```
Running on the host (without Docker), only duplasena has a problem:
```console
$ ./run zzloteria.sh
...
----------------------------------------------------------------------------------------------------------------------------------------------
[FAILED #3, line 13] zzloteria duplasena | tr '[0-9]' 'N' | sed '/^ *$/d;s/^ *//' | head -6
@@ -1,6 +1 @@
duplasena:
-Concurso NNNN (NN/NN/NNNN)
-Nº sorteio
-NN NN NN NN NN NN
-Nº sorteio
-NN NN NN NN NN NN
----------------------------------------------------------------------------------------------------------------------------------------------
.............
FAIL: 1 of 16 tests failed
$
```",1,docker zzloteria falhando problemas no tmp tentei mapeando o tmp do host e sem mapear também o resultado foi o mesmo o primeiro erro é a timemania mas esse parece ser não relacionado com os demais que são os testes de consulta de histórico dos sorteios e estão com algum problema ao acessar arquivos no tmp console docker run rm w app testador entrypoint run v pwd app v tmp tmp funcoeszz zzloteria sh zzloteria timemania tr n head timemania concurso nnn nn nn nnnn nn nn nn nn nn nn nn zzloteria loteca tr n head sed s col meio col n mv cannot stat ‘ tmp d loteca htm’ no such file or directory can t access file localhost tmp zz loteria loteca htm alert unable to access document lynx can t access startfile loteca concurso nnn nn nn nnnn jogo resultado n col n n col n n col n n col n n col n n col n n col n n col n n col n nn col n nn col n nn col n nn col n nn col n zzloteria timemania tr n head mv cannot stat ‘ tmp d timasc htm’ no such file or directory can t access file localhost tmp zz loteria timemania htm alert unable to access document lynx can t access startfile timemania concurso nnn nn nn nnnn nn nn nn nn nn nn nn zzloteria federal federal concurso mv cannot stat ‘ tmp d lotfed htm’ no such file or directory premio r premio r premio r premio r premio r can t access file localhost tmp zz loteria federal htm alert unable to access document lynx can t access startfile zzloteria duplasena duplasena concurso mv cannot stat ‘ tmp d dplsen htm’ no such file or directory sorteio can t access file localhost tmp zz loteria duplasena htm alert unable to access document sorteio sorteio sena nao houve acertador quina r quadra r sorteio sena r quina r quadra r lynx can t access startfile zzloteria quina quina concurso mv cannot stat ‘ tmp d quina htm’ no such file or directory quina nao houve acertador quadra r terno r can t access file localhost tmp zz loteria quina htm alert unable to access document lynx can t access startfile zzloteria megasena megasena concurso mv cannot stat ‘ tmp d megasc htm’ no such file or directory sena r quina r quadra r can t access file localhost tmp zz loteria megasena htm alert unable to access document lynx can t access startfile zzloteria lotomania lotomania concurso mv cannot stat ‘ tmp d lotman htm’ no such file or directory pts r pts r pts r pts r pts r pts r can t access file localhost tmp zz loteria lotomania htm alert unable to access document lynx can t access startfile zzloteria lotofacil lotofacil concurso mv cannot stat ‘ tmp d lotfac htm’ no such file or directory pts nao houve acertador pts r pts r pts r pts r can t access file localhost tmp zz loteria lotofacil htm alert unable to access document lynx can t access startfile fail of tests failed rodando no host sem docker dá problema apenas na duplasena console run zzloteria sh zzloteria duplasena tr n sed d s head duplasena concurso nnnn nn nn nnnn nº sorteio nn nn nn nn nn nn nº sorteio nn nn nn nn nn nn fail of tests failed ,1
24204,12043999168.0,IssuesEvent,2020-04-14 13:22:35,hserv/coordinated-entry,https://api.github.com/repos/hserv/coordinated-entry,opened,make question creation not contingent on question group creation,enhancement survey_service,"Currently, one must create a ""dummy"" question group at a minimum, to create a survey question. We should make this an optional requirement.",1.0,"make question creation not contingent on question group creation - Currently, one must create a ""dummy"" question group at a minimum, to create a survey question. We should make this an optional requirement.",0,make question creation not contingent on question group creation currently one must create a dummy question group at a minimum to create a survey question we should make this an optional requirement ,0
1728,25232895992.0,IssuesEvent,2022-11-14 21:27:10,zephyrproject-rtos/zephyr,https://api.github.com/repos/zephyrproject-rtos/zephyr,opened,Regulate GNUism in Zephyr codebase,area: Toolchains area: Portability Meta,"## Preface
Zephyr codebase currently contains many ""GNUisms"" (GCC-specific constructs) in the assembly and C code because the majority of its developers use the GCC compiler and the upstream CI uses the Zephyr SDK toolchain, which is based on the GCC compiler, to test the submitted patches.
The extensive use of GNUisms throughout the Zephyr codebase creates various portability issues, especially for the proprietary compilers like IAR that strictly adhere to the ISO C standard and do not implement the GNU extensions.
In order to ensure that the Zephyr codebase remains portable and compatible with the compilers other than GCC, it is necessary to set guidelines on the GNU extension usage and regulate them at the project level.
## Tasks
### Toolchain WG
- [ ] Identify GNU extensions currently in use and evaluate their importance and alternatives.
- [ ] Set guidelines on the GNU extension usage (per extension).
* If a GNU extension is essential to Zephyr codebase and there is no standard alternative, the extension usage shall be unconditionally allowed (a notorious example of such is ""statement expression"").
* If a GNU extension is not essential but can be sufficiently valuable in improving code quality and/or performance, the extension usage shall be allowed, provided that a fallback mechanism is implemented (a notorious example of such is ""builtin functions"").
* If a GNU extension is not essential and an equivalent standard alternative is available, the extension usage shall be disallowed.
- [ ] Evaluate enforcement strategies (per extension).
- [ ] Implement enforcement strategies.",True,"Regulate GNUism in Zephyr codebase - ## Preface
Zephyr codebase currently contains many ""GNUisms"" (GCC-specific constructs) in the assembly and C code because the majority of its developers use the GCC compiler and the upstream CI uses the Zephyr SDK toolchain, which is based on the GCC compiler, to test the submitted patches.
The extensive use of GNUisms throughout the Zephyr codebase creates various portability issues, especially for the proprietary compilers like IAR that strictly adhere to the ISO C standard and do not implement the GNU extensions.
In order to ensure that the Zephyr codebase remains portable and compatible with the compilers other than GCC, it is necessary to set guidelines on the GNU extension usage and regulate them at the project level.
## Tasks
### Toolchain WG
- [ ] Identify GNU extensions currently in use and evaluate their importance and alternatives.
- [ ] Set guidelines on the GNU extension usage (per extension).
* If a GNU extension is essential to Zephyr codebase and there is no standard alternative, the extension usage shall be unconditionally allowed (a notorious example of such is ""statement expression"").
* If a GNU extension is not essential but can be sufficiently valuable in improving code quality and/or performance, the extension usage shall be allowed, provided that a fallback mechanism is implemented (a notorious example of such is ""builtin functions"").
* If a GNU extension is not essential and an equivalent standard alternative is available, the extension usage shall be disallowed.
- [ ] Evaluate enforcement strategies (per extension).
- [ ] Implement enforcement strategies.",1,regulate gnuism in zephyr codebase preface zephyr codebase currently contains many gnuisms gcc specific constructs in the assembly and c code because the majority of its developers use the gcc compiler and the upstream ci uses the zephyr sdk toolchain which is based on the gcc compiler to test the submitted patches the extensive use of gnuisms throughout the zephyr codebase creates various portability issues especially for the proprietary compilers like iar that strictly adhere to the iso c standard and do not implement the gnu extensions in order to ensure that the zephyr codebase remains portable and compatible with the compilers other than gcc it is necessary to set guidelines on the gnu extension usage and regulate them at the project level tasks toolchain wg identify gnu extensions currently in use and evaluate their importance and alternatives set guidelines on the gnu extension usage per extension if a gnu extension is essential to zephyr codebase and there is no standard alternative the extension usage shall be unconditionally allowed a notorious example of such is statement expression if a gnu extension is not essential but can be sufficiently valuable in improving code quality and or performance the extension usage shall be allowed provided that a fallback mechanism is implemented a notorious example of such is builtin functions if a gnu extension is not essential and an equivalent standard alternative is available the extension usage shall be disallowed evaluate enforcement strategies per extension implement enforcement strategies ,1
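As a deliberately rough illustration of the "implement enforcement strategies" item in the issue above, the sketch below greps C sources for a few GNU extensions that happen to be easy to match textually. It is an assumed toy, not Zephyr tooling; in practice enforcement would more likely build the tree with `-std=c99 -pedantic-errors` and let the compiler report every extension.
```python
#!/usr/bin/env python3
"""Toy check that flags a few easily-greppable GNU extensions in C sources."""
import re
import sys
from pathlib import Path

# Extension name -> regex that roughly matches its use. Regexes are only a
# heuristic; many GNUisms cannot be detected reliably this way.
PATTERNS = {
    "case ranges": re.compile(r"case\s+\S+\s*\.\.\.\s*\S+\s*:"),
    "typeof": re.compile(r"\btypeof\s*\("),
    "statement expressions": re.compile(r"\(\s*\{"),
}

def main(root: str) -> int:
    hits = 0
    for path in Path(root).rglob("*.[ch]"):
        text = path.read_text(errors="ignore")
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                line = text.count("\n", 0, match.start()) + 1
                print(f"{path}:{line}: uses GNU extension: {name}")
                hits += 1
    return 1 if hits else 0  # non-zero exit fails the CI job

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "."))
```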
1504,22153226332.0,IssuesEvent,2022-06-03 19:17:18,apache/beam,https://api.github.com/repos/apache/beam,opened,"Spec out what mutations are allowed to a constructed model pipeline, particularly coders",portability P3 improvement beam-model,"Context: presume an SDK has constructed a pipeline or sub-pipeline, and sent it - as a model proto - to another party, which could be a runner or another SDK.
Question to be resolved: What mutations are allowed to this pipeline?
For example, depending on how an SDK harness is implemented, some coders (aka wire formats) can be swapped while leaving the language-level types compatible. For example, ""urn:beam:coder:varlong"" and ""urn:beam:coder:bigendianlong"". It may also be possible to add or remove added length prefixes in some situations.
What we mean by _coder_ is a wire format specification for a _stream_ of elements, specified by a `FunctionSpec` proto and its component coders (and so on recursively).
For many coders, if the encoding is not known to a party, then the boundaries of elements cannot be discerned. But there are lots of situations where boundaries need to be known without full decoding - particularly by runners, but also at some point for SDK-to-SDK transmission.
*Possibility 1*: insist that a coder...
```
Coder {
spec: FunctionSpec { urn: ""beam:coder:my_whatever_coder"" }
}
```
... is always allowed to be replaced by the same coder, wrapped with an added length prefix ...
```
Coder {
spec: FunctionSpec { urn: ""beam:coder:add_length_prefix"" }
component_coders: [
Coder
{
spec: FunctionSpec { urn: ""beam:coder:my_whatever_coder"" }
}
]
}
```
There is a responsibility that each SDK harness understand this coder and also be able to execute the same UDFs with the decoded values. This is already sort of implicit in how the Fn API produces ProcessBundleDescriptors, since a runner can never assume to understand SDK coders.
*Possibility 2*: allow optimization by indicating a way to determine element boundaries
It may be that even for a coder that cannot be understood, the element boundaries can be easily discerned. For example, if a coder _already_ puts a length prefix in a known format at the start of each element, you just need to pull that out. This means that for an unknown coder, you can save the computation and space of adding a length prefix. (if you can understand ""urn:beam:coder:add_length_prefix"" then that special case is already handled)
It might look something like this:
```
Coder {
spec: FunctionSpec { urn: ""beam:coder:my_whatever_coder"" }
also_decodes_as: Coder {
spec: FunctionSpec { urn: ""beam:coder:add_length_prefix"" }
component_coders: [
Coder:
{ urn: ""beam:coder:uninterpretable_bytes"" }
]
}
}
```
The extra coder in `also_decodes_as` must be completely wire-compatible and should always be composed of completely standardized coders, so element boundaries can always be ascertained. An annoyance here is the possibility of silly protos where this recurses. Since the main implementation we expect is a length prefix, it could just be a flag, or just a coder for the length prefix itself.
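To make the length-prefix idea in Possibility 1/2 concrete, the sketch below frames opaque element bytes with a fixed 4-byte big-endian length, so a party that cannot decode the payload can still find element boundaries. This is only an assumed toy: Beam's actual length-prefix coder uses a var-int length, and the function names here are invented.
```python
import struct
from typing import Iterator, List

def add_length_prefix(elements: List[bytes]) -> bytes:
    """Frame each opaque element with a 4-byte big-endian length prefix."""
    out = bytearray()
    for payload in elements:
        out += struct.pack(">I", len(payload))  # boundary info anyone can read
        out += payload                          # SDK-specific bytes, left untouched
    return bytes(out)

def split_length_prefixed(stream: bytes) -> Iterator[bytes]:
    """Recover element boundaries without interpreting the inner encoding."""
    offset = 0
    while offset < len(stream):
        (size,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        yield stream[offset:offset + size]
        offset += size

# A runner can split or shuffle the framed stream purely from the prefixes;
# only the SDK that owns "my_whatever_coder" ever decodes the payload bytes.
assert list(split_length_prefixed(add_length_prefix([b"abc", b""]))) == [b"abc", b""]
```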
Imported from Jira [BEAM-3203](https://issues.apache.org/jira/browse/BEAM-3203). Original Jira may contain additional context.
Reported by: kenn.",True,"Spec out what mutations are allowed to a constructed model pipeline, particularly coders - Context: presume an SDK has constructed a pipeline or sub-pipeline, and sent it - as a model proto - to another party, which could be a runner or another SDK.
Question to be resolved: What mutations are allowed to this pipeline?
For example, depending on how an SDK harness is implemented, some coders (aka wire formats) can be swapped while leaving the language-level types compatible. For example, ""urn:beam:coder:varlong"" and ""urn:beam:coder:bigendianlong"". It may also be possible to add or remove added length prefixes in some situations.
What we mean by _coder_ is a wire format specification for a _stream_ of elements, specified by a `FunctionSpec` proto and its component coders (and so on recursively).
For many coders, if the encoding is not known to a party, then the boundaries of elements cannot be discerned. But there are lots of situations where boundaries need to be known without full decoding - particularly by runners, but also at some point for SDK-to-SDK transmission.
*Possibility 1*: insist that a coder...
```
Coder {
spec: FunctionSpec { urn: ""beam:coder:my_whatever_coder"" }
}
```
... is always allowed to be replaced by the same coder, wrapped with an added length prefix ...
```
Coder {
spec: FunctionSpec { urn: ""beam:coder:add_length_prefix"" }
component_coders: [
Coder
{
spec: FunctionSpec { urn: ""beam:coder:my_whatever_coder"" }
}
]
}
```
There is a responsibility that each SDK harness understand this coder and also be able to execute the same UDFs with the decoded values. This is already sort of implicit in how the Fn API produces ProcessBundleDescriptors, since a runner can never assume to understand SDK coders.
*Possibility 2*: allow optimization by indicating a way to determine element boundaries
It may be that even for a coder that cannot be understood, the element boundaries can be easily discerned. For example, if a coder _already_ puts a length prefix in a known format at the start of each element, you just need to pull that out. This means that for an unknown coder, you can save the computation and space of adding a length prefix. (if you can understand ""urn:beam:coder:add_length_prefix"" then that special case is already handled)
It might look something like this:
```
Coder {
spec: FunctionSpec { urn: ""beam:coder:my_whatever_coder"" }
also_decodes_as: Coder {
spec: FunctionSpec { urn: ""beam:coder:add_length_prefix"" }
component_coders: [
Coder:
{ urn: ""beam:coder:uninterpretable_bytes"" }
]
}
}
```
The extra coder in `also_decodes_as` must be completely wire-compatible and should always be composed of completely standardized coders, so element boundaries can always be ascertained. An annoyance here is the possibility of silly protos where this recurses. Since the main implementation we expect is a length prefix, it could just be a flag, or just a coder for the length prefix itself.
Imported from Jira [BEAM-3203](https://issues.apache.org/jira/browse/BEAM-3203). Original Jira may contain additional context.
Reported by: kenn.",1,spec out what mutations are allowed to a constructed model pipeline particularly coders context presume an sdk has constructed a pipeline or sub pipeline and sent it as a model proto to another party which could be a runner or another sdk question to be resolved what mutations are allowed to this pipeline for example depending on how an sdk harness is implemented some coders aka wire formats can be swapped while leaving the language level types compatible for example urn beam coder varlong and urn beam coder bigendianlong it may also be possible to add or remove added length prefixes in some situations what we mean by coder is a wire format specification for a stream of elements specified by a functionspec proto and its components coders and so on recursively for many coders if the encoding is not known to a party then the boundaries of elements cannot be discerned but there are lots of situations where boundaries need to be known without full decoding particularly by runners but also at some point for sdk to sdk transmission possibility insist that a coder coder spec functionspec urn beam coder my whatever coder is always allowed to be replaced by the same coder wrapped with an added lengh prefix coder spec functionspec urn beam coder add length prefix component coders coder spec functionspec urn beam coder my whatever coder there is a responsibility that each sdk harness understand this coder and also be able to execute the same udfs with the decoded values this is already sort of implicit in how the fn api produces processbundledescriptors since a runner can never assume to understand sdk coders posibility allow optimization by indicating a way to determine element boundaries it may be that even for a coder that cannot be understood the element boundaries can be easily discerned for example if a coder already puts a length prefix in a known format at the start of each element you just need to pull that out this means that for an unknown coder you can save the computation and space of adding a length prefix if you can understand urn beam coder add length prefix then that special case is already handled it might look something like this coder spec functionspec urn beam coder my whatever coder also decodes as coder spec functionspec urn beam coder add length prefix component coders coder urn beam coder uninterpretable bytes the extra coder in also decodes as must be completely wire compatible and should always be compose of completely standardized coders so element boundaries can always be ascertained an annoyance here is the possibility for silly protos where this recurses since the main implementation we expect is a length prefix it could just be a flag or just a coder for the length prefix itself imported from jira original jira may contain additional context reported by kenn ,1
1617,23332429204.0,IssuesEvent,2022-08-09 06:58:29,StormSurgeLive/asgs,https://api.github.com/repos/StormSurgeLive/asgs,closed,add `fetch` command to get commonly used repos,incremental improvement PR pending portability,"Getting and updating asgs-configs from `asgsh` should be a command away (all underneath are `git` commands),
* `fetch configs` -> wrapper around `git clone` of `asgs-configs` (run once)
* `fetch docs` -> wrapper around `git clone` of `asgs.wiki`
* `fetch storm-archive` -> ... `storm-archve`
* allow custom repo aliases to be configured
* all repos go into `$SCRIPTDIR/git`",True,"add `fetch` command to get commonly used repos - Getting and updating asgs-configs from `asgsh` should be a command away (all underneath are `git` commands),
* `fetch configs` -> wrapper around `git clone` of `asgs-configs` (run once)
* `fetch docs` -> wrapper around `git clone` of `asgs.wiki`
* `fetch storm-archive` -> ... `storm-archve`
* allow custom repo aliases to be configured
* all repos go into `$SCRIPTDIR/git`",1,add fetch command to get commonly used repos getting and updating asgs configs from asgsh should be a command away all underneath are git commands fetch configs wrapper around git clone of asgs configs run once fetch docs wrapper around git clone of asgs wiki fetch storm archive storm archve allow custom repo aliases to be configured all repos go into scriptdir git ,1
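A minimal sketch of the alias-to-repository mapping described above, written in Python for illustration (the real `asgsh` is a shell environment) and with placeholder URLs rather than the project's actual repository locations:
```python
import os
import subprocess

SCRIPTDIR = os.environ.get("SCRIPTDIR", ".")

# Alias -> repository URL. The org and URLs below are placeholders.
REPO_ALIASES = {
    "configs": "https://github.com/example-org/asgs-configs.git",
    "docs": "https://github.com/example-org/asgs.wiki.git",
}

def fetch(alias: str) -> None:
    """Clone the aliased repo into $SCRIPTDIR/git, or pull if already cloned."""
    url = REPO_ALIASES[alias]
    dest = os.path.join(SCRIPTDIR, "git", os.path.basename(url).removesuffix(".git"))
    if os.path.isdir(os.path.join(dest, ".git")):
        subprocess.run(["git", "-C", dest, "pull", "--ff-only"], check=True)
    else:
        subprocess.run(["git", "clone", url, dest], check=True)
```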
267690,28509170221.0,IssuesEvent,2023-04-19 01:41:19,dpteam/RK3188_TABLET,https://api.github.com/repos/dpteam/RK3188_TABLET,closed,CVE-2017-6951 (Medium) detected in linuxv3.0 - autoclosed,Mend: dependency security vulnerability,"## CVE-2017-6951 - Medium Severity Vulnerability
Vulnerable Library - linuxv3.0
Linux kernel source tree
Library home page: https://github.com/verygreen/linux.git
Found in HEAD commit: 0c501f5a0fd72c7b2ac82904235363bd44fd8f9e
Found in base branch: master
Vulnerable Source Files (0)
Vulnerability Details
The keyring_search_aux function in security/keys/keyring.c in the Linux kernel through 3.14.79 allows local users to cause a denial of service (NULL pointer dereference and OOPS) via a request_key system call for the ""dead"" type.
Publish Date: 2017-03-16
URL: CVE-2017-6951
CVSS 3 Score Details (5.5 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-6951
Release Date: 2017-03-16
Fix Resolution: v4.11-rc8
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2017-6951 (Medium) detected in linuxv3.0 - autoclosed - ## CVE-2017-6951 - Medium Severity Vulnerability
Vulnerable Library - linuxv3.0
Linux kernel source tree
Library home page: https://github.com/verygreen/linux.git
Found in HEAD commit: 0c501f5a0fd72c7b2ac82904235363bd44fd8f9e
Found in base branch: master
Vulnerable Source Files (0)
Vulnerability Details
The keyring_search_aux function in security/keys/keyring.c in the Linux kernel through 3.14.79 allows local users to cause a denial of service (NULL pointer dereference and OOPS) via a request_key system call for the ""dead"" type.
Publish Date: 2017-03-16
URL: CVE-2017-6951
CVSS 3 Score Details (5.5 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-6951
Release Date: 2017-03-16
Fix Resolution: v4.11-rc8
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in autoclosed cve medium severity vulnerability vulnerable library linux kernel source tree library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details the keyring search aux function in security keys keyring c in the linux kernel through allows local users to cause a denial of service null pointer dereference and oops via a request key system call for the dead type publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0
53131,13125276288.0,IssuesEvent,2020-08-06 06:18:16,junit-pioneer/junit-pioneer,https://api.github.com/repos/junit-pioneer/junit-pioneer,closed,Add Nexus credentials to Repository settings,⚙️ component: Pioneer 🏗️ type: task 📖 theme: build 🛸 3rd-party: Github Actions,"Our release build needs some credentials to be working correctly. It seems like they are not set up correctly, so we need to add them, or update them.
We do need 4 Properties: https://github.com/junit-pioneer/junit-pioneer/blob/master/.github/workflows/releaseTrigger.yml#L38-L41
- NEXUS_TOKEN_PASSWORD
- NEXUS_TOKEN_USERNAME
and they need to be added here:

",1.0,"Add Nexus credentials to Repository settings - Our release build needs some credentials to be working correctly. It seems like they are not set up correctly, so we need to add them, or update them.
We do need 4 Properties: https://github.com/junit-pioneer/junit-pioneer/blob/master/.github/workflows/releaseTrigger.yml#L38-L41
- NEXUS_TOKEN_PASSWORD
- NEXUS_TOKEN_USERNAME
and they need to be added here:

",0,add nexus credentials to repository settings our release build needs some credentials to be working correctly it seems like they are not set up correctly so we need to add them or update them we do need properties nexus token password nexus token username and they need to be added here ,0
867,11451484170.0,IssuesEvent,2020-02-06 11:42:45,ToFuProject/tofu,https://api.github.com/repos/ToFuProject/tofu,opened,Deploy bin via setup.py,enhancement portability,"tofu typically delivers at least 2 bash scripts to be executable directly from the command line:
* tofuplot : load data from imas and plot it in an interactive figure
* tofucalc : load data from imas and use it to calculate synthetic data for a diagnostic and plot it in an interactive figure
They can be used only when the sub-package imas2tofu is available (i.e.: only when imas is available), but in that case they greatly simplify calls to tofu for users not familiar with python. They do not allow as many options as calling tofu from the ipython console, but they provide the basics for an everyday use.
These 2 binaries shall be automatically deployed by setup.py at tofu installation in order to be available straight from the command line.
We'll use the [tool ](https://python-packaging.readthedocs.io/en/latest/command-line-scripts.html) that @lasofivec found recently (first option with script keyword arg because it allows for non-python scripts). ",True,"Deploy bin via setup.py - tofu typically delivers at least 2 bash scripts to be executable directly from the command line:
* tofuplot : load data from imas and plot it in an interactive figure
* tofucalc : load data from imas and use it to calculate synthetic data for a diagnostic and plot it in an interactive figure
They can be used only when the sub-package imas2tofu is available (i.e.: only when imas is available), but in that case they greatly simplify calls to tofu for users not familiar with python. They do not allow as many options as calling tofu from the ipython console, but they provide the basics for an everyday use.
These 2 binaries shall be automatically deployed by setup.py at tofu installation in order to be available straight from the command line.
We'll use the [tool ](https://python-packaging.readthedocs.io/en/latest/command-line-scripts.html) that @lasofivec found recently (first option with script keyword arg because it allows for non-python scripts). ",1,deploy bin via setup py tofu typically delivers at leawt bash scripts to be executable driectly from the command line tofuplot load data from imas and plot it in an interactive figure tofucalc load data from imas and use it to calculate synthetic data for a diagnostic and plot it in an interactive figure they can be used only when the sub package is available i e only when imas is available but in that case they greatly simplify calls to tofu for users not familiar with python they do not allow as many options as calling tofu from the ipython console but they provide the basics for an everyday use these binaries shall be automatically deloyed by setup py at tofu installation in order to be available straight from the command line we ll use the that lasofivec found recently first option with script keyword arg because it allows for non python scripts ,1
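A minimal `setup.py` sketch of the approach the issue settles on: the `scripts` keyword copies the listed files into the environment's `bin/` directory at install time, which is why it also works for non-Python (bash) scripts. The `bin/...` paths and version number are assumptions, not taken from the tofu repository.
```python
# setup.py (sketch)
from setuptools import setup, find_packages

setup(
    name="tofu",
    version="0.0.0",              # placeholder version
    packages=find_packages(),
    scripts=[
        "bin/tofuplot",           # load data from IMAS and plot it interactively
        "bin/tofucalc",           # compute synthetic diagnostic data and plot it
    ],
)
```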
539,7605482053.0,IssuesEvent,2018-04-30 09:11:23,arangodb/arangodb,https://api.github.com/repos/arangodb/arangodb,closed,"After upgrade to 3.3.4 : ""WARNING Failed to update Foxx store: Error: Failed to update Foxx store"" messages in arangod.log",1 Bug 2 Fixed 3 UI supportability,"## my environment running ArangoDB
I'm using the latest ArangoDB of the respective release series:
- [ ] 2.8
- [ ] 3.0
- [ ] 3.1
- [ ] 3.2
- [v ] 3.3
- [ ] self-compiled devel branch
Mode:
- [ ] Cluster
- [v ] Single-Server
Storage-Engine:
- [ ] mmfiles
- [v ] rocksdb
On this operating system:
- [ ] DCOS on
- [ ] AWS
- [ ] Azure
- [ ] own infrastructure
- [ ] Linux
- [ ] Debian .deb
- [ ] Ubuntu .deb
- [ ] SUSE .rpm
- [v ] RedHat .rpm
- [ ] Fedora .rpm
- [ ] Gentoo
- [ ] docker - official docker library
- [ ] other:
- [ ] Windows, version:
- [ ] MacOS, version:
### this is an AQL-related issue:
[ ] I'm using graph features
I'm issuing AQL via:
- [ ] web interface with this browser: running on this OS:
- [ ] arangosh
- [ ] this Driver:
I've run `db._explain("""")` and it didn't shed more light on this.
The AQL query in question is:
The issue can be reproduced using this dataset:
Please provide a way to create the dataset to run the above query on; either by a gist with an arangodump, or `db.collection.save({my: ""values""}) statements. If it can be reproduced with one of the ArangoDB example datasets, it's a plus.
### Foxx
Greetings,
I've upgraded Arango RPM from 3.3.3 to 3.3.4.
Now, when using the Browser Interface, every time I enter any DB, the following messages appear in the arangod.log file:
2018-03-15T12:33:05 [29317] WARNING Failed to update Foxx store: Error: Failed to update Foxx store
2018-03-15T12:33:05 [29317] WARNING at Object.update (/usr/share/arangodb3/js/common/modules/@arangodb/foxx/store.js:354:7)
2018-03-15T12:33:05 [29317] WARNING at Route._handler (/usr/share/arangodb3/js/apps/system/_admin/aardvark/APP/foxxes.js:363:11)
2018-03-15T12:33:05 [29317] WARNING at next (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:412:15)
2018-03-15T12:33:05 [29317] WARNING at /usr/share/arangodb3/js/node/node_modules/lodash/lodash.js:10029:25
2018-03-15T12:33:05 [29317] WARNING at Middleware.router.use (/usr/share/arangodb3/js/apps/system/_admin/aardvark/APP/foxxes.js:54:3)
2018-03-15T12:33:05 [29317] WARNING at next (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:414:15)
2018-03-15T12:33:05 [29317] WARNING at next (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:410:7)
2018-03-15T12:33:05 [29317] WARNING at next (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:410:7)
2018-03-15T12:33:05 [29317] WARNING at next (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:410:7)
2018-03-15T12:33:05 [29317] WARNING at dispatch (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:426:3)
I've upgraded arango from 3.3.3 to 3.3.4 at several hosts - and this behavior happens at all of them.
Again, those messages appear after I enter any DB (""_system"" or another/domestic one) from the Web Browser Interface.
Are those messages harmful, or can they be ignored?
Best regards and looking forward your assistance,
Avi Vainshtein
### this is a web interface-related issue:
I'm using the web interface with this browser: running on this OS:
- [ ] authentication is enabled?
- [ ] using the cluster?
- [ ] _system database?
These are the steps to reproduce:
1) open the browser on http://127.0.0.1:8529
2) log in as ...
3) use database [ ] `_system` [ ] other:
4) click to ...
...
The following problem occurs: [Screenshot?]
Instead I would be expecting:
### this is an installation-related issue:
Describe which steps you carried out, what you expected to happen and what actually happened.
",True,"After upgrade to 3.3.4 : ""WARNING Failed to update Foxx store: Error: Failed to update Foxx store"" messages in arangod.log - ## my environment running ArangoDB
I'm using the latest ArangoDB of the respective release series:
- [ ] 2.8
- [ ] 3.0
- [ ] 3.1
- [ ] 3.2
- [v ] 3.3
- [ ] self-compiled devel branch
Mode:
- [ ] Cluster
- [v ] Single-Server
Storage-Engine:
- [ ] mmfiles
- [v ] rocksdb
On this operating system:
- [ ] DCOS on
- [ ] AWS
- [ ] Azure
- [ ] own infrastructure
- [ ] Linux
- [ ] Debian .deb
- [ ] Ubuntu .deb
- [ ] SUSE .rpm
- [v ] RedHat .rpm
- [ ] Fedora .rpm
- [ ] Gentoo
- [ ] docker - official docker library
- [ ] other:
- [ ] Windows, version:
- [ ] MacOS, version:
### this is an AQL-related issue:
[ ] I'm using graph features
I'm issuing AQL via:
- [ ] web interface with this browser: running on this OS:
- [ ] arangosh
- [ ] this Driver:
I've run `db._explain("""")` and it didn't shed more light on this.
The AQL query in question is:
The issue can be reproduced using this dataset:
Please provide a way to create the dataset to run the above query on; either by a gist with an arangodump, or `db.collection.save({my: ""values""}) statements. If it can be reproduced with one of the ArangoDB example datasets, it's a plus.
### Foxx
Greetings,
I've upgraded Arango RPM from 3.3.3 to 3.3.4.
Now, when using the Browser Interface, every time I enter any DB, the following messages appear in the arangod.log file:
2018-03-15T12:33:05 [29317] WARNING Failed to update Foxx store: Error: Failed to update Foxx store
2018-03-15T12:33:05 [29317] WARNING at Object.update (/usr/share/arangodb3/js/common/modules/@arangodb/foxx/store.js:354:7)
2018-03-15T12:33:05 [29317] WARNING at Route._handler (/usr/share/arangodb3/js/apps/system/_admin/aardvark/APP/foxxes.js:363:11)
2018-03-15T12:33:05 [29317] WARNING at next (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:412:15)
2018-03-15T12:33:05 [29317] WARNING at /usr/share/arangodb3/js/node/node_modules/lodash/lodash.js:10029:25
2018-03-15T12:33:05 [29317] WARNING at Middleware.router.use (/usr/share/arangodb3/js/apps/system/_admin/aardvark/APP/foxxes.js:54:3)
2018-03-15T12:33:05 [29317] WARNING at next (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:414:15)
2018-03-15T12:33:05 [29317] WARNING at next (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:410:7)
2018-03-15T12:33:05 [29317] WARNING at next (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:410:7)
2018-03-15T12:33:05 [29317] WARNING at next (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:410:7)
2018-03-15T12:33:05 [29317] WARNING at dispatch (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:426:3)
I've upgraded arango from 3.3.3 to 3.3.4 at several hosts - and this behavior happens at all of them.
Again, those messages appear after I enter any DB (""_system"" or another/domestic one) from the Web Browser Interface.
Are those messages harmful, or can they be ignored?
Best regards and looking forward your assistance,
Avi Vainshtein
### this is a web interface-related issue:
I'm using the web interface with this browser: running on this OS:
- [ ] authentication is enabled?
- [ ] using the cluster?
- [ ] _system database?
These are the steps to reproduce:
1) open the browser on http://127.0.0.1:8529
2) log in as ...
3) use database [ ] `_system` [ ] other:
4) click to ...
...
The following problem occurs: [Screenshot?]
Instead I would be expecting:
### this is an installation-related issue:
Describe which steps you carried out, what you expected to happen and what actually happened.
",1,after upgrade to warning failed to update foxx store error failed to update foxx store messages in arangod log my environment running arangodb i m using the latest arangodb of the respective release series self compiled devel branch mode cluster single server storage engine mmfiles rocksdb on this operating system dcos on aws azure own infrastructure linux debian deb ubuntu deb suse rpm redhat rpm fedora rpm gentoo docker official docker library other windows version macos version this is an aql related issue i m using graph features i m issuing aql via web interface with this browser running on this os arangosh this driver i ve run db explain and it didn t shed more light on this the aql query in question is the issue can be reproduced using this dataset please provide a way to create the dataset to run the above query on either by a gist with an arangodump or db collection save my values statements if it can be reproduced with one of the arangodb example datasets it s a plus foxx greetings i ve upgraded arango rpm from to now when using the browser interface every time i enter into any db the following messages appear in the arangod log file warning failed to update foxx store error failed to update foxx store warning at object update usr share js common modules arangodb foxx store js warning at route handler usr share js apps system admin aardvark app foxxes js warning at next usr share js server modules arangodb foxx router tree js warning at usr share js node node modules lodash lodash js warning at middleware router use usr share js apps system admin aardvark app foxxes js warning at next usr share js server modules arangodb foxx router tree js warning at next usr share js server modules arangodb foxx router tree js warning at next usr share js server modules arangodb foxx router tree js warning at next usr share js server modules arangodb foxx router tree js warning at dispatch usr share js server modules arangodb foxx router tree js i ve upgraded arango from to at several hosts and this behavior happens at all of them again those messages appear after i enter into any db system or other domestic from web browser interface are those messages harmful or they can be ignored best regards and looking forward your assistance avi vainshtein this is a web interface related issue i m using the web interface with this browser running on this os authentication is enabled using the cluster system database these are the steps to reproduce open the browser on log in as use database system other click to the following problem occurs instead i would be expecting this is an installation related issue describe which steps you carried out what you expected to happen and what actually happened ,1
233,4732783243.0,IssuesEvent,2016-10-19 09:01:51,wahern/cqueues,https://api.github.com/repos/wahern/cqueues,closed,Compile error with Lua 5.3,packaging/portability,"gcc version 4.1.2 20080704 (Red Hat 4.1.2-55)
OS: Centos 5.11
```
# make
enabling Lua 5.3
mkdir -p /root/lua/cqueues/src/5.3
cc -O2 -std=gnu99 -fPIC -g -Wall -Wextra -Wno-missing-field-initializers -Wno-unused -DLUA_COMPAT_APIINTCASTS -D_REENTRANT -D_THREAD_SAFE -D_GNU_SOURCE -DCQUEUES_VENDOR='""william@25thandClement.com""' -DCQUEUES_VERSION=20160318L -DCQUEUES_COMMIT='""0dfba5d3505a3b03c358e01681f7d99686f9f802""' -c -o /root/lua/cqueues/src/5.3/cqueues.o /root/lua/cqueues/src/cqueues.c
In file included from /root/lua/cqueues/src/cqueues.c:51:
/root/lua/cqueues/src/cqueues.h: In function ‘cqs_setfd’:
/root/lua/cqueues/src/cqueues.h:437: error: ‘O_CLOEXEC’ undeclared (first use in this function)
/root/lua/cqueues/src/cqueues.h:437: error: (Each undeclared identifier is reported only once
/root/lua/cqueues/src/cqueues.h:437: error: for each function it appears in.)
/root/lua/cqueues/src/cqueues.h: In function ‘cqs_pipe’:
/root/lua/cqueues/src/cqueues.h:448: warning: implicit declaration of function ‘pipe2’
/root/lua/cqueues/src/cqueues.c: In function ‘alert_init’:
/root/lua/cqueues/src/cqueues.c:469: error: ‘O_CLOEXEC’ undeclared (first use in this function)
make: *** [/root/lua/cqueues/src/5.3/cqueues.o] Error 1
```",True,"Compile error with Lua 5.3 - gcc version 4.1.2 20080704 (Red Hat 4.1.2-55)
OS: Centos 5.11
```
# make
enabling Lua 5.3
mkdir -p /root/lua/cqueues/src/5.3
cc -O2 -std=gnu99 -fPIC -g -Wall -Wextra -Wno-missing-field-initializers -Wno-unused -DLUA_COMPAT_APIINTCASTS -D_REENTRANT -D_THREAD_SAFE -D_GNU_SOURCE -DCQUEUES_VENDOR='""william@25thandClement.com""' -DCQUEUES_VERSION=20160318L -DCQUEUES_COMMIT='""0dfba5d3505a3b03c358e01681f7d99686f9f802""' -c -o /root/lua/cqueues/src/5.3/cqueues.o /root/lua/cqueues/src/cqueues.c
In file included from /root/lua/cqueues/src/cqueues.c:51:
/root/lua/cqueues/src/cqueues.h: In function ‘cqs_setfd’:
/root/lua/cqueues/src/cqueues.h:437: error: ‘O_CLOEXEC’ undeclared (first use in this function)
/root/lua/cqueues/src/cqueues.h:437: error: (Each undeclared identifier is reported only once
/root/lua/cqueues/src/cqueues.h:437: error: for each function it appears in.)
/root/lua/cqueues/src/cqueues.h: In function ‘cqs_pipe’:
/root/lua/cqueues/src/cqueues.h:448: warning: implicit declaration of function ‘pipe2’
/root/lua/cqueues/src/cqueues.c: In function ‘alert_init’:
/root/lua/cqueues/src/cqueues.c:469: error: ‘O_CLOEXEC’ undeclared (first use in this function)
make: *** [/root/lua/cqueues/src/5.3/cqueues.o] Error 1
```",1,compile error with lua gcc version red hat os centos make enabling lua mkdir p root lua cqueues src cc std fpic g wall wextra wno missing field initializers wno unused dlua compat apiintcasts d reentrant d thread safe d gnu source dcqueues vendor william com dcqueues version dcqueues commit c o root lua cqueues src cqueues o root lua cqueues src cqueues c in file included from root lua cqueues src cqueues c root lua cqueues src cqueues h in function ‘cqs setfd’ root lua cqueues src cqueues h error ‘o cloexec’ undeclared first use in this function root lua cqueues src cqueues h error each undeclared identifier is reported only once root lua cqueues src cqueues h error for each function it appears in root lua cqueues src cqueues h in function ‘cqs pipe’ root lua cqueues src cqueues h warning implicit declaration of function ‘ ’ root lua cqueues src cqueues c in function ‘alert init’ root lua cqueues src cqueues c error ‘o cloexec’ undeclared first use in this function make error ,1
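The `O_CLOEXEC`/`pipe2` failures above are typical of the old glibc shipped with CentOS 5, which predates both symbols. A minimal compatibility sketch (not the cqueues project's actual fix; the helper name `compat_pipe_cloexec` is invented for illustration) would fall back to `fcntl()`:
```c
/* Editorial sketch, not from the cqueues sources: a fallback for old glibc
 * (e.g. CentOS 5) where O_CLOEXEC and pipe2() are missing. */
#include <fcntl.h>
#include <unistd.h>

#ifndef O_CLOEXEC
#define O_CLOEXEC 0 /* flag not available; rely on fcntl() below instead */
#endif

/* Create a pipe and mark both ends close-on-exec without pipe2(). */
static int compat_pipe_cloexec(int fd[2]) {
	if (pipe(fd) != 0)
		return -1;
	if (fcntl(fd[0], F_SETFD, FD_CLOEXEC) == -1 ||
	    fcntl(fd[1], F_SETFD, FD_CLOEXEC) == -1) {
		close(fd[0]);
		close(fd[1]);
		return -1;
	}
	return 0;
}
```
The `fcntl()` route is not atomic across `fork()`/`exec()`, which is why newer code prefers `pipe2(..., O_CLOEXEC)`, but it at least compiles and runs on pre-2.6.27 kernels.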
707,9601779289.0,IssuesEvent,2019-05-10 13:04:45,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,error: 'CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE' undeclared (first use in this function),portability,"Hi Guys,
Any ideas about this error?
I've been trying for an hour to fix this.
Cheers
make find_version
make[1]: Entering directory '/home/test/newjohn/JohnTheRipper/src'
echo ""#define JTR_GIT_VERSION JUMBO_VERSION ""\""-c771966\"" \"" 2019-05-10 00:59:38 +0200\"""""" > version.h.new
diff >/dev/null 2>/dev/null version.h.new version.h && rm -f version.h.new || mv -f version.h.new version.h
make[1]: Leaving directory '/home/test/newjohn/JohnTheRipper/src'
make[1]: Entering directory '/home/test/newjohn/JohnTheRipper/src'
echo ""#define JTR_GIT_VERSION JUMBO_VERSION ""\""-c771966\"" \"" 2019-05-10 00:59:38 +0200\"""""" > version.h.new
diff >/dev/null 2>/dev/null version.h.new version.h && rm -f version.h.new || mv -f version.h.new version.h
gcc -DAC_BUILT -mavx2 -DJOHN_AVX2 -c -m64 -g -O2 -I/usr/local/include -DARCH_LITTLE_ENDIAN=1 -Wall -fno-omit-frame-pointer --param allow-store-data-races=0 -Wno-deprecated-declarations -Wformat-extra-args -Wunused-but-set-variable -std=gnu89 -Wdate-time -D_POSIX_SOURCE -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -fopenmp -pthread -I/usr/local/include -DCL_SILENCE_DEPRECATION -DHAVE_OPENCL -pthread -funroll-loops opencl_common.c -o opencl_common.o
In file included from opencl_common.c:50:0:
opencl_common.c: In function 'get_kernel_preferred_multiple':
opencl_common.c:2536:3: error: 'CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE' undeclared (first use in this function)
CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE,
^
opencl_common.h:260:23: note: in definition of macro 'HANDLE_CLERROR'
do { cl_int __err = (cl_error); \
^
opencl_common.c:2536:3: note: each undeclared identifier is reported only once for each function it appears in
CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE,
^
opencl_common.h:260:23: note: in definition of macro 'HANDLE_CLERROR'
do { cl_int __err = (cl_error); \
^
opencl_common.c: In function 'get_processors_count':
opencl_common.c:2623:34: error: 'CL_DEVICE_NATIVE_VECTOR_WIDTH_LONG' undeclared (first use in this function)
CL_DEVICE_NATIVE_VECTOR_WIDTH_LONG,
^
opencl_common.h:260:23: note: in definition of macro 'HANDLE_CLERROR'
do { cl_int __err = (cl_error); \
^
opencl_common.c: In function 'opencl_list_devices':
opencl_common.c:3031:20: error: 'CL_DEVICE_NATIVE_VECTOR_WIDTH_CHAR' undeclared (first use in this function)
CL_DEVICE_NATIVE_VECTOR_WIDTH_CHAR,
^
opencl_common.c:3035:20: error: 'CL_DEVICE_NATIVE_VECTOR_WIDTH_SHORT' undeclared (first use in this function)
CL_DEVICE_NATIVE_VECTOR_WIDTH_SHORT,
^
opencl_common.c:3039:20: error: 'CL_DEVICE_NATIVE_VECTOR_WIDTH_INT' undeclared (first use in this function)
CL_DEVICE_NATIVE_VECTOR_WIDTH_INT,
^
opencl_common.c:3043:20: error: 'CL_DEVICE_NATIVE_VECTOR_WIDTH_LONG' undeclared (first use in this function)
CL_DEVICE_NATIVE_VECTOR_WIDTH_LONG,
^
Makefile:1866: recipe for target 'opencl_common.o' failed
make[1]: *** [opencl_common.o] Error 1
make[1]: Leaving directory '/home/test/newjohn/JohnTheRipper/src'
Makefile:189: recipe for target 'default' failed
make: *** [default] Error 2",True,"error: 'CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE' undeclared (first use in this function) - Hi Guys,
Any ideas about this error?
I've been trying for an hour to fix this.
Cheers
make find_version
make[1]: Entering directory '/home/test/newjohn/JohnTheRipper/src'
echo ""#define JTR_GIT_VERSION JUMBO_VERSION ""\""-c771966\"" \"" 2019-05-10 00:59:38 +0200\"""""" > version.h.new
diff >/dev/null 2>/dev/null version.h.new version.h && rm -f version.h.new || mv -f version.h.new version.h
make[1]: Leaving directory '/home/test/newjohn/JohnTheRipper/src'
make[1]: Entering directory '/home/test/newjohn/JohnTheRipper/src'
echo ""#define JTR_GIT_VERSION JUMBO_VERSION ""\""-c771966\"" \"" 2019-05-10 00:59:38 +0200\"""""" > version.h.new
diff >/dev/null 2>/dev/null version.h.new version.h && rm -f version.h.new || mv -f version.h.new version.h
gcc -DAC_BUILT -mavx2 -DJOHN_AVX2 -c -m64 -g -O2 -I/usr/local/include -DARCH_LITTLE_ENDIAN=1 -Wall -fno-omit-frame-pointer --param allow-store-data-races=0 -Wno-deprecated-declarations -Wformat-extra-args -Wunused-but-set-variable -std=gnu89 -Wdate-time -D_POSIX_SOURCE -D_GNU_SOURCE -D_XOPEN_SOURCE=600 -fopenmp -pthread -I/usr/local/include -DCL_SILENCE_DEPRECATION -DHAVE_OPENCL -pthread -funroll-loops opencl_common.c -o opencl_common.o
In file included from opencl_common.c:50:0:
opencl_common.c: In function 'get_kernel_preferred_multiple':
opencl_common.c:2536:3: error: 'CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE' undeclared (first use in this function)
CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE,
^
opencl_common.h:260:23: note: in definition of macro 'HANDLE_CLERROR'
do { cl_int __err = (cl_error); \
^
opencl_common.c:2536:3: note: each undeclared identifier is reported only once for each function it appears in
CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE,
^
opencl_common.h:260:23: note: in definition of macro 'HANDLE_CLERROR'
do { cl_int __err = (cl_error); \
^
opencl_common.c: In function 'get_processors_count':
opencl_common.c:2623:34: error: 'CL_DEVICE_NATIVE_VECTOR_WIDTH_LONG' undeclared (first use in this function)
CL_DEVICE_NATIVE_VECTOR_WIDTH_LONG,
^
opencl_common.h:260:23: note: in definition of macro 'HANDLE_CLERROR'
do { cl_int __err = (cl_error); \
^
opencl_common.c: In function 'opencl_list_devices':
opencl_common.c:3031:20: error: 'CL_DEVICE_NATIVE_VECTOR_WIDTH_CHAR' undeclared (first use in this function)
CL_DEVICE_NATIVE_VECTOR_WIDTH_CHAR,
^
opencl_common.c:3035:20: error: 'CL_DEVICE_NATIVE_VECTOR_WIDTH_SHORT' undeclared (first use in this function)
CL_DEVICE_NATIVE_VECTOR_WIDTH_SHORT,
^
opencl_common.c:3039:20: error: 'CL_DEVICE_NATIVE_VECTOR_WIDTH_INT' undeclared (first use in this function)
CL_DEVICE_NATIVE_VECTOR_WIDTH_INT,
^
opencl_common.c:3043:20: error: 'CL_DEVICE_NATIVE_VECTOR_WIDTH_LONG' undeclared (first use in this function)
CL_DEVICE_NATIVE_VECTOR_WIDTH_LONG,
^
Makefile:1866: recipe for target 'opencl_common.o' failed
make[1]: *** [opencl_common.o] Error 1
make[1]: Leaving directory '/home/test/newjohn/JohnTheRipper/src'
Makefile:189: recipe for target 'default' failed
make: *** [default] Error 2",1,error cl kernel preferred work group size multiple undeclared first use in this function hi guys any ideas of this error ive been trying for an hour to fix this cheers make find version make entering directory home test newjohn johntheripper src echo define jtr git version jumbo version version h new diff dev null dev null version h new version h rm f version h new mv f version h new version h make leaving directory home test newjohn johntheripper src make entering directory home test newjohn johntheripper src echo define jtr git version jumbo version version h new diff dev null dev null version h new version h rm f version h new mv f version h new version h gcc dac built djohn c g i usr local include darch little endian wall fno omit frame pointer param allow store data races wno deprecated declarations wformat extra args wunused but set variable std wdate time d posix source d gnu source d xopen source fopenmp pthread i usr local include dcl silence deprecation dhave opencl pthread funroll loops opencl common c o opencl common o in file included from opencl common c opencl common c in function get kernel preferred multiple opencl common c error cl kernel preferred work group size multiple undeclared first use in this function cl kernel preferred work group size multiple opencl common h note in definition of macro handle clerror do cl int err cl error opencl common c note each undeclared identifier is reported only once for each function it appears in cl kernel preferred work group size multiple opencl common h note in definition of macro handle clerror do cl int err cl error opencl common c in function get processors count opencl common c error cl device native vector width long undeclared first use in this function cl device native vector width long opencl common h note in definition of macro handle clerror do cl int err cl error opencl common c in function opencl list devices opencl common c error cl device native vector width char undeclared first use in this function cl device native vector width char opencl common c error cl device native vector width short undeclared first use in this function cl device native vector width short opencl common c error cl device native vector width int undeclared first use in this function cl device native vector width int opencl common c error cl device native vector width long undeclared first use in this function cl device native vector width long makefile recipe for target opencl common o failed make error make leaving directory home test newjohn johntheripper src makefile recipe for target default failed make error ,1
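The `CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE` and `CL_DEVICE_NATIVE_VECTOR_WIDTH_*` identifiers in the log above were introduced with OpenCL 1.1, so this build is most likely picking up OpenCL 1.0 headers. A small, hypothetical pre-flight probe (not part of the JtR sources) makes that failure mode explicit:
```c
/* Editorial sketch: fail early and clearly when the OpenCL headers on the
 * include path predate 1.1, which is exactly the situation the compile
 * errors above point at. */
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

#ifndef CL_VERSION_1_1
#error "OpenCL headers predate 1.1; CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE will be undeclared"
#endif

int main(void) { return 0; }
```
If that guess is right, pointing the include path at a recent `opencl-headers` package or a newer vendor SDK should get past this error.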
901,11865138158.0,IssuesEvent,2020-03-25 23:27:03,Azure/azure-functions-host,https://api.github.com/repos/Azure/azure-functions-host,closed,Host process should not terminate on function timeout in OOP languages,P1 Supportability,"When running with an OOP language worker, function timeouts should not cause a host process restart. In those cases, we should be able to terminate and recycle the worker without bringing the host down, addressing a class of issues related to host process termination.",True,"Host process should not terminate on function timeout in OOP languages - When running with an OOP language worker, function timeouts should not cause a host process restart. In those cases, we should be able to terminate and recycle the worker without bringing the host down, addressing a class of issues related to host process termination.",1,host process should not terminate on function timeout in oop languages when running with an oop language worker function timeouts should not cause a host process restart in those cases we should be able to terminate and recycle the worker without bringing the host down addressing a class of issues related to host process termination ,1
32963,8981995016.0,IssuesEvent,2019-01-31 00:08:09,JordanMartinez/purescript-jordans-reference,https://api.github.com/repos/JordanMartinez/purescript-jordans-reference,opened,Include 'Solutions to Common Bower problems' in this repo,Build-Tools enhancement,"Include the work I did in purescript/documentation#237 but which wasn't included due to being 'out of the scope of project goals.'
> **Warning:** if you are exploring PureScript and a compiler release was made recently, read `Breaking Compiler Changes and Bower`
#### Solutions to Common Problems with Bower
At various times, you might encounter a problem with Bower due to its cache mechanism. When in doubt, run the following command and see if that fixes your issue:
```bash
# Deletes the 'bower_modules' and 'output' directories,
# ensuring that the next build will be completely fresh.
rm -rf bower_modules/ output/
```
# Breaking Compiler Changes and Bower
The following issue is happening less and less frequently, but still needs to be stated.
## Annoyance Defined
If a compiler release that includes breaking changes was released recently, it will take some time for libraries in the ecosystem to become compatible with that release. If you are using Bower as your dependency manager, it may try to install libraries that are and are not compatible with the new release, creating problems.
## Recommended Guidelines
In such circumstances, follow these guidelines to help find the correct version of a library:
- Go to Pursuit and look at the library's package page. Choose one of the library's versions and compare that version's publish date with the date of the compiler release. Those that occur after the compiler release are likely compatible with the new release.
- Since `purescript-prelude` is a dependency for most libraries, see which version of `purescript-prelude` the library uses. That should indicate whether it's compatible with a new compiler release or not.
- If all else fails, check the library's last few commit messages in its repository for any messages about updating to the new compiler release.
",1.0,"Include 'Solutions to Common Bower problems' in this repo - Include the work I did in purescript/documentation#237 but which wasn't included due to being 'out of the scope of project goals.'
> **Warning:** if you are exploring PureScript and a compiler release was made recently, read `Breaking Compiler Changes and Bower`
#### Solutions to Common Problems with Bower
At various times, you might encounter a problem with Bower due to its cache mechanism. When in doubt, run the following command and see if that fixes your issue:
```bash
# Deletes the 'bower_modules' and 'output' directories,
# ensuring that the next build will be completely fresh.
rm -rf bower_modules/ output/
```
# Breaking Compiler Changes and Bower
The following issue is happening less and less frequently, but still needs to be stated.
## Annoyance Defined
If a compiler release that includes breaking changes was released recently, it will take some time for libraries in the ecosystem to become compatible with that release. If you are using Bower as your dependency manager, it may try to install libraries that are and are not compatible with the new release, creating problems.
## Recommended Guidelines
In such circumstances, follow these guidelines to help find the correct version of a library:
- Go to Pursuit and look at the library's package page. Choose one of the library's versions and compare that version's publish date with the date of the compiler release. Those that occur after the compiler release are likely compatible with the new release.
- Since `purescript-prelude` is a dependency for most libraries, see which version of `purescript-prelude` the library uses. That should indicate whether it's compatible with a new compiler release or not.
- If all else fails, check the library's last few commit messages in its repository for any messages about updating to the new compiler release.
",0,include solutions to common bower problems in this repo include the work i did in purescript documentation but which wasn t included due to being out of the scope of project goals warning if you are exploring purescript and a compiler release was made recently read breaking compiler changes and bower solutions to common problems with bower at various times you might encounter a problem with bower due to its cache mechanism when in doubt run the following command and see if that fixes your issue bash deletes the bower modules and output directories ensuring that the next build will be completely fresh rm rf bower modules output breaking compiler changes and bower the following issue is happening less and less frequently but still needs to be stated annoyance defined if a compiler release that includes breaking changes was released recently it will take some time for libraries in the ecosystem to become compatible with that release if you are using bower as your dependency manager it may try to install libraries that are and are not compatible with the new release creating problems recommended guidelines in such circumstances follow these guidelines to help find the correct version of a library go to pursuit and look at the library s package page choose one of the library s versions and compare that version s publish date with the date of the compiler release those that occur after the compiler release are likely compatible with the new release since purescript prelude is a dependency for most libraries see which version of purescript prelude the library uses that should indicate whether it s compatible with a new compiler release or not if all else fails check the library s last few commit messages in its repository for any messages about updating to the new compiler release ,0
6866,6649567238.0,IssuesEvent,2017-09-28 13:40:25,commercetools/commercetools-sync-java,https://api.github.com/repos/commercetools/commercetools-sync-java,opened,JavaDoc Jar seems to not be uploaded to bintray,Infrastructure,"### Description
https://travis-ci.org/commercetools/commercetools-sync-java/builds/280844376
It seems that the JavaDoc Jar is still not built (as seen in the latest build), and hence the releases are still not making it to Maven Central: https://bintray.com/commercetools/maven/commercetools-sync-java#central
",1.0,"JavaDoc Jar seems to not be uploaded to bintray - ### Description
https://travis-ci.org/commercetools/commercetools-sync-java/builds/280844376
It seems that the JavaDoc Jar is still not built (as seen in the latest build), and hence the releases are still not making it to Maven Central: https://bintray.com/commercetools/maven/commercetools-sync-java#central
",0,javadoc jar seems to not be uploaded to bintray description seems that the javadoc jar is not built still as seen in the latest build and hence the releases are still not making it to maven central ,0
97102,10981259322.0,IssuesEvent,2019-11-30 20:22:51,adopted-ember-addons/ember-electron,https://api.github.com/repos/adopted-ember-addons/ember-electron,opened,Add testing docs,documentation,We should add some docs to let people know to write normal Ember tests and how we would recommend testing the electron parts.,1.0,Add testing docs - We should add some docs to let people know to write normal Ember tests and how we would recommend testing the electron parts.,0,add testing docs we should add some docs to let people know to write normal ember tests and how we would recommend testing the electron parts ,0
746,10083433623.0,IssuesEvent,2019-07-25 13:40:28,Azure/azure-webjobs-sdk,https://api.github.com/repos/Azure/azure-webjobs-sdk,closed,Queue Trigger Polling Instrumentation Request,3.x Supportability,"There have been a number of reports of the QueueTrigger stop polling and a restart of the Web Job is needed.
This is on QueueTrigger jobs written using Web Jobs 3.x.
E.G.
[Queue trigger stops picking up messages after connection drops and restores](https://github.com/Azure/azure-webjobs-sdk/issues/2072)
[QueueTrigger stops triggering - how to troubleshoot?](https://github.com/Azure/azure-webjobs-sdk/issues/2136)
This is referenced in general at [Instrument Polling Algorithms with Trace statements](https://github.com/Azure/azure-webjobs-sdk/issues/1956)
The main idea would be to show if we're still polling the Queue or if it has stopped for some reason.
Otherwise to troubleshoot this we would need more information.
1. Storage Account Logs for the Queue we are polling.
2. A Dump file of the Dotnet.exe process hosting the WebJob that is not responding.
",True,"Queue Trigger Polling Instrumentation Request - There have been a number of reports of the QueueTrigger stop polling and a restart of the Web Job is needed.
This is on QueueTrigger jobs written using Web Jobs 3.x.
E.G.
[Queue trigger stops picking up messages after connection drops and restores](https://github.com/Azure/azure-webjobs-sdk/issues/2072)
[QueueTrigger stops triggering - how to troubleshoot?](https://github.com/Azure/azure-webjobs-sdk/issues/2136)
This is referenced in general at [Instrument Polling Algorithms with Trace statements](https://github.com/Azure/azure-webjobs-sdk/issues/1956)
The main idea would be to show if we're still polling the Queue or if it has stopped for some reason.
Otherwise to troubleshoot this we would need more information.
1. Storage Account Logs for the Queue we are polling.
2. A Dump file of the Dotnet.exe process hosting the WebJob that is not responding.
",1,queue trigger polling instrumentation request there have been a number of reports of the queuetrigger stop polling and a restart of the web job is needed this is on queuetrigger jobs written using web jobs x e g queue trigger stops picking up messages after connection drops and restores this is referenced in general at the main idea would be to show if we re still polling the queue or if it has stopped for some reason otherwise to troubleshoot this we would need more information storage account logs for the queue we are polling a dump file of the dotnet exe process hosting the webjob that is not responding ,1
54171,29801282852.0,IssuesEvent,2023-06-16 08:18:05,JanusGraph/janusgraph,https://api.github.com/repos/JanusGraph/janusgraph,closed,Enable multiQuery optimization for valueMap step,kind/performance area/tinkerpop area/core,"We have JanusGraphPropertiesStep which extends PropertiesStep and supports multiQuery optimization, so that values() step can be optimized.
We can do something similar to PropertyMapStep so that multiQuery optimization can be applied to valueMap() step as well.
Requested by https://github.com/JanusGraph/janusgraph/discussions/2401",True,"Enable multiQuery optimization for valueMap step - We have JanusGraphPropertiesStep which extends PropertiesStep and supports multiQuery optimization, so that values() step can be optimized.
We can do something similar to PropertyMapStep so that multiQuery optimization can be applied to valueMap() step as well.
Requested by https://github.com/JanusGraph/janusgraph/discussions/2401",0,enable multiquery optimization for valuemap step we have janusgraphpropertiesstep which extends propertiesstep and supports multiquery optimization so that values step can be optimized we can do something similar to propertymapstep so that multiquery optimization can be applied to valuemap step as well requested by ,0
96099,19899770916.0,IssuesEvent,2022-01-25 06:07:51,flutter/website,https://api.github.com/repos/flutter/website,closed,"[PAGE ISSUE]: 'Write your first Flutter app, part 1'",p2-medium e0-minutes codelab,"### Page URL
https://docs.flutter.dev/get-started/codelab/
### Page source
https://github.com/flutter/website/tree/main/src/get-started/codelab.md
### Describe the problem
The page says the following:
> The main() method uses arrow (=>) notation. Use arrow notation for one-line functions or methods.
Yet the referenced code does not appear to actually use arrow notation.
### Expected fix
_No response_
### Additional context
_No response_",1.0,"[PAGE ISSUE]: 'Write your first Flutter app, part 1' - ### Page URL
https://docs.flutter.dev/get-started/codelab/
### Page source
https://github.com/flutter/website/tree/main/src/get-started/codelab.md
### Describe the problem
The page says the following:
> The main() method uses arrow (=>) notation. Use arrow notation for one-line functions or methods.
Yet the referenced code does not appear to actually use arrow notation.
### Expected fix
_No response_
### Additional context
_No response_",0, write your first flutter app part page url page source describe the problem the page says the following the main method uses arrow notation use arrow notation for one line functions or methods yet the referenced code does not appear to actually use arrow notation expected fix no response additional context no response ,0
287901,21676489913.0,IssuesEvent,2022-05-08 19:55:22,bounswe/bounswe2022group9,https://api.github.com/repos/bounswe/bounswe2022group9,closed,Combining Milestone Report,Documentation Priority: High In Progress,"TODO:
- [x] Everyone shall put their parts on the Milestone Report",1.0,"Combining Milestone Report - TODO:
- [x] Everyone shall put their parts on the Milestone Report",0,combining milestone report todo everyone shall put their parts on the milestone report,0
49624,6228788252.0,IssuesEvent,2017-07-11 00:52:02,autoboxer/MARE,https://api.github.com/repos/autoboxer/MARE,opened,redesign the All Events page,category - redesign development - css development - html development - node,This should match Anna's designs (possibly changing the 'more' arrow to fit the content pages that already exist to prevent a greater amount of rework).,1.0,redesign the All Events page - This should match Anna's designs (possibly changing the 'more' arrow to fit the content pages that already exist to prevent a greater amount of rework).,0,redesign the all events page this should match anna s designs possibly changing the more arrow to fit the content pages that already exist to prevent a greater amount of rework ,0
729805,25145678117.0,IssuesEvent,2022-11-10 05:01:30,open-telemetry/opentelemetry-collector-contrib,https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib,closed,[exporter/clickhouseexporter] Clickhouse export fail due to plog.LogRecordFlags unable to convert to UInt32,bug Stale priority:p2 exporter/clickhouse,"**Describe the bug**
A clear and concise description of what the bug is.
Clickhouse export fail because clickhouse-go is unable to convert plog.LogRecordFlags into UInt32
**Steps to reproduce**
If possible, provide a recipe for reproducing the error.
Configure clickhouse exporter with default values and send logs then the error should pop out.
**What did you expect to see?**
A clear and concise description of what you expected to see.
OpenTelemetry Collector should export to Clickhouse database.
**What did you see instead?**
A clear and concise description of what you saw instead.
```
2022-09-02T09:11:06.767Z info exporterhelper/queued_retry.go:427 Exporting failed. Will retry the request after interval. {""kind"": ""exporter"", ""data_type"": ""logs"", ""name"": ""clickhouse"", ""error"": ""ExecContext:clickhouse [AppendRow]: TraceFlags clickhouse [AppendRow]: converting plog.LogRecordFlags to UInt32 is unsupported"", ""interval"": ""21.589639508s""}
```
**What version did you use?**
Version: v0.59.0
**What config did you use?**
Config:
```yaml
extensions:
health_check:
receivers:
otlp:
grpc:
http:
processors:
batch:
timeout: 1s
exporters:
clickhouse:
dsn: clickhouse://default:password@localhost:9000/default
ttl_days: 3
timeout: 5s
logs_table_name: otel_logs
retry_on_failure:
enabled: true
initial_interval: 5s
max_interval: 30s
max_elapsed_time: 300s
logging:
logLevel: info
service:
pipelines:
logs:
receivers: [otlp]
processors: [batch]
exporters:
- clickhouse
- logging
extensions: [health_check]
```
**Environment**
Docker Image: otel/opentelemetry-collector-contrib:0.59.0
Platform: arm64
OS: Ubuntu 22.04
",1.0,"[exporter/clickhouseexporter] Clickhouse export fail due to plog.LogRecordFlags unable to convert to UInt32 - **Describe the bug**
A clear and concise description of what the bug is.
Clickhouse export fail because clickhouse-go is unable to convert plog.LogRecordFlags into UInt32
**Steps to reproduce**
If possible, provide a recipe for reproducing the error.
Configure clickhouse exporter with default values and send logs then the error should pop out.
**What did you expect to see?**
A clear and concise description of what you expected to see.
OpenTelemetry Collector should export to Clickhouse database.
**What did you see instead?**
A clear and concise description of what you saw instead.
```
2022-09-02T09:11:06.767Z info exporterhelper/queued_retry.go:427 Exporting failed. Will retry the request after interval. {""kind"": ""exporter"", ""data_type"": ""logs"", ""name"": ""clickhouse"", ""error"": ""ExecContext:clickhouse [AppendRow]: TraceFlags clickhouse [AppendRow]: converting plog.LogRecordFlags to UInt32 is unsupported"", ""interval"": ""21.589639508s""}
```
**What version did you use?**
Version: v0.59.0
**What config did you use?**
Config:
```yaml
extensions:
health_check:
receivers:
otlp:
grpc:
http:
processors:
batch:
timeout: 1s
exporters:
clickhouse:
dsn: clickhouse://default:password@localhost:9000/default
ttl_days: 3
timeout: 5s
logs_table_name: otel_logs
retry_on_failure:
enabled: true
initial_interval: 5s
max_interval: 30s
max_elapsed_time: 300s
logging:
logLevel: info
service:
pipelines:
logs:
receivers: [otlp]
processors: [batch]
exporters:
- clickhouse
- logging
extensions: [health_check]
```
**Environment**
Docker Image: otel/opentelemetry-collector-contrib:0.59.0
Platform: arm64
OS: Ubuntu 22.04
",0, clickhouse export fail due to plog logrecordflags unable to convert to describe the bug a clear and concise description of what the bug is clickhouse export fail because clickhouse go is unable to convert plog logrecordflags into steps to reproduce if possible provide a recipe for reproducing the error configure clickhouse exporter with default values and send logs then the error should pop out what did you expect to see a clear and concise description of what you expected to see opentelemetry collector should export to clickhouse database what did you see instead a clear and concise description of what you saw instead info exporterhelper queued retry go exporting failed will retry the request after interval kind exporter data type logs name clickhouse error execcontext clickhouse traceflags clickhouse converting plog logrecordflags to is unsupported interval what version did you use version what config did you use config yaml extensions health check receivers otlp grpc http processors batch timeout exporters clickhouse dsn clickhouse default password localhost default ttl days timeout logs table name otel logs retry on failure enabled true initial interval max interval max elapsed time logging loglevel info service pipelines logs receivers processors exporters clickhouse logging extensions environment docker image otel opentelemetry collector contrib platform os ubuntu ,0
52983,6668800387.0,IssuesEvent,2017-10-03 17:01:54,Esri/military-feature-toolbox,https://api.github.com/repos/Esri/military-feature-toolbox,closed,Transfer labels,4 - Done A-bug A-feature A-question B-high B-low B-moderate C-L C-M C-S C-XL C-XS E-as designed E-duplicate E-invalid E-no count E-non reproducible E-verified E-won't fix FT-Workflows G-Design G-Development G-Documentation G-Documentation Review G-Research G-Testing HP-Candidate HP-HotFix HP-Patch priority - Showstopper,"_From @lfunkhouser on October 3, 2017 16:48_
_From @lfunkhouser on October 2, 2017 15:24_
This issue is used to transfer issues to another repo
_Copied from original issue: Esri/solutions-grg-widget#119_
_Copied from original issue: Esri/solutions-geoevent-java#79_",2.0,"Transfer labels - _From @lfunkhouser on October 3, 2017 16:48_
_From @lfunkhouser on October 2, 2017 15:24_
This issue is used to transfer issues to another repo
_Copied from original issue: Esri/solutions-grg-widget#119_
_Copied from original issue: Esri/solutions-geoevent-java#79_",0,transfer labels from lfunkhouser on october from lfunkhouser on october this issue is used to transfer issues to another repo copied from original issue esri solutions grg widget copied from original issue esri solutions geoevent java ,0
1324,18285046427.0,IssuesEvent,2021-10-05 09:20:30,SAP/xsk,https://api.github.com/repos/SAP/xsk,opened,[Core] Log xssecurity processor errors using ProblemsFacade,core customer supportability,"Implement it for the **xssecurity** processor.
This will help for customer testing.
Using method `logProcessorsErrors()` in [class](https://github.com/SAP/xsk/blob/main/modules/engines/engine-commons/src/main/java/com/sap/xsk/utils/XSKCommonsUtils.java).
Related to: https://github.com/SAP/xsk/issues/465",True,"[Core] Log xssecurity processor errors using ProblemsFacade - Implement it for the **xssecurity** processor.
This will help for customer testing.
Using method `logProcessorsErrors()` in [class](https://github.com/SAP/xsk/blob/main/modules/engines/engine-commons/src/main/java/com/sap/xsk/utils/XSKCommonsUtils.java).
Related to: https://github.com/SAP/xsk/issues/465",1, log xssecurity processor errors using problemsfacade implement it for the xssecurity processor this will help for customer testing using method logprocessorserrors in related to ,1
39259,5064659089.0,IssuesEvent,2016-12-23 08:00:18,tor4kichi/Hohoema,https://api.github.com/repos/tor4kichi/Hohoema,opened,Fix the base layout including the menu,ui_design,"Change it so that a layout is applied for each of the following screen types:
* For medium to large screens
* For mobile
* For XInput
Make an option available so that the XInput layout is not used when operating on a PC.",1.0,"Fix the base layout including the menu - Change it so that a layout is applied for each of the following screen types:
* For medium to large screens
* For mobile
* For XInput
Make an option available so that the XInput layout is not used when operating on a PC.",0,fix the base layout including the menu change it so that a layout is applied for each of the following screen types for medium to large screens for mobile for xinput make an option available so that the xinput layout is not used when operating on a pc,0
700124,24047753356.0,IssuesEvent,2022-09-16 09:51:17,magento/magento2,https://api.github.com/repos/magento/magento2,closed,Wrong Tax calculation on Creditmemo (under certain circumstances),Issue: Cannot Reproduce Progress: PR in progress Priority: P1 Issue: ready for confirmation Issue: needs update,"
### Preconditions (*)
1. 2.4.3
2. Create Creditmemo in Backend
3. Sometimes they can not be displayed as there is an Exception in the template (see below)
### Steps to reproduce (*)
1. Hard to reproduce; in short: if you have mainly zero-tax articles and create creditmemos, they cannot be displayed (exception in the template) if the order with the same ID as the creditmemo contains articles with taxes.
### Expected result (*)
1. Creditmemos can all be displayed correctly
### Actual result (*)
1. Exception thrown in magento/module-sales/view/adminhtml/templates/order/totals/tax.phtml:62 (trying to access method on array)
---
Please provide [Severity](https://devdocs.magento.com/guides/v2.4/contributor-guide/contributing.html#backlog) assessment for the Issue as Reporter. This information will help during Confirmation and Issue triage processes.
- [ ] Severity: **S0** _- Affects critical data or functionality and leaves users without workaround._
- [x] Severity: **S1** _- Affects critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S2** _- Affects non-critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S3** _- Affects non-critical data or functionality and does not force users to employ a workaround._
- [ ] Severity: **S4** _- Affects aesthetics, professional look and feel, “quality” or “usability”._
",1.0,"Wrong Tax calculation on Creditmemo (under certain circumstances) -
### Preconditions (*)
1. 2.4.3
2. Create Creditmemo in Backend
3. Sometimes they can not be displayed as there is an Exception in the template (see below)
### Steps to reproduce (*)
1. Hard to reproduce; in short: if you have mainly zero-tax articles and create creditmemos, they cannot be displayed (exception in the template) if the order with the same ID as the creditmemo contains articles with taxes.
### Expected result (*)
1. Creditmemos can all be displayed correctly
### Actual result (*)
1. Exception thrown in magento/module-sales/view/adminhtml/templates/order/totals/tax.phtml:62 (trying to access method on array)
---
Please provide [Severity](https://devdocs.magento.com/guides/v2.4/contributor-guide/contributing.html#backlog) assessment for the Issue as Reporter. This information will help during Confirmation and Issue triage processes.
- [ ] Severity: **S0** _- Affects critical data or functionality and leaves users without workaround._
- [x] Severity: **S1** _- Affects critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S2** _- Affects non-critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S3** _- Affects non-critical data or functionality and does not force users to employ a workaround._
- [ ] Severity: **S4** _- Affects aesthetics, professional look and feel, “quality” or “usability”._
",0,wrong tax calculation on creditmemo under certain circumstances please review our guidelines before adding a new issue fields marked with are required please don t remove the template preconditions provide the exact magento version example and any important information on the environment where bug is reproducible create creditmemo in backend sometimes they can not be displayed as there is an exception in the template see below steps to reproduce important provide a set of clear steps to reproduce this bug we can not provide support without clear instructions on how to reproduce hard to reproduce anyhow if you have mainly zero tax articles and create creditmemos they can not be displayed exception in the template if the order with the same id as the creditmemo contains articles with taxes expected result creditmemos can all be displayed correctly actual result exception thrown in magento module sales view adminhtml templates order totals tax phtml trying to access method on array please provide assessment for the issue as reporter this information will help during confirmation and issue triage processes severity affects critical data or functionality and leaves users without workaround severity affects critical data or functionality and forces users to employ a workaround severity affects non critical data or functionality and forces users to employ a workaround severity affects non critical data or functionality and does not force users to employ a workaround severity affects aesthetics professional look and feel “quality” or “usability” ,0
491,7023027657.0,IssuesEvent,2017-12-22 13:36:11,magnumripper/JohnTheRipper,https://api.github.com/repos/magnumripper/JohnTheRipper,closed,JtR MinGW builds (seem to) have memory corruptions,bug portability RFC / discussion,"- It is possible to crack using a **native windows [1] build of JtR**.
- But, it is not possible to test it.
- It always fails with heap corruption or bad memory access [2].
### Steps to reproduce
- Build on Windows and run JtR using `--test` or `--test-full` options.
- Real cracking works (at least, I wasn't able to trigger an error).
### Workaround
- Build JtR with memdbg enabled (it even succeeds with a `--test-full=0`).
### System configuration
```
Win> john --list=build-info
Version: 1.8.0-jumbo-1-5448-g17a7687
Build: mingw64 64-bit SSE2-ac
SIMD: SSE2, interleaving: MD4:3 MD5:3 SHA1:1 SHA256:1 SHA512:1
$JOHN is
Format interface version: 14
Max. number of reported tunable costs: 3
Rec file version: REC4
Charset file version: CHR3
CHARSET_MIN: 1 (0x01)
CHARSET_MAX: 255 (0xff)
CHARSET_LENGTH: 24
SALT_HASH_SIZE: 1048576
Max. Markov mode level: 400
Max. Markov mode password length: 30
gcc version: 5.3.0
Crypto library: OpenSSL
OpenSSL library version: 01000207f
OpenSSL 1.0.2g 1 Mar 2016
GMP library version: 6.1.0
File locking: NOT supported by this build - do not run concurrent sessions!
fseek(): fseeko64
ftell(): ftello64
fopen(): fopen64
memmem(): JtR internal
```
### Other information
Seems **NOT** related to:
- #2083
- #2190
****
* [1] I built JtR on someone else's HP Windows 7 notebook and I put JtR in a Windows CI service available online. I built and tested on i686 and x86_64.
* [2] 0xC0000374: STATUS_HEAP_CORRUPTION or 0xC0000005: ""memory access violation"".
****
BTW: printf definitions, e.g. ""LLu"", have problems on 32bits.",True,"JtR MinGW builds (seem to) have memory corruptions - - It is possible to crack using a **native windows [1] build of JtR**.
- But, it is not possible to test it.
- It always fails with heap corruption or bad memory access [2].
### Steps to reproduce
- Build on Windows and run JtR using `--test` or `--test-full` options.
- Real cracking works (at least, I wasn't able to trigger an error).
### Workaround
- Build JtR with memdbg enabled (it even succeeds with a `--test-full=0`).
### System configuration
```
Win> john --list=build-info
Version: 1.8.0-jumbo-1-5448-g17a7687
Build: mingw64 64-bit SSE2-ac
SIMD: SSE2, interleaving: MD4:3 MD5:3 SHA1:1 SHA256:1 SHA512:1
$JOHN is
Format interface version: 14
Max. number of reported tunable costs: 3
Rec file version: REC4
Charset file version: CHR3
CHARSET_MIN: 1 (0x01)
CHARSET_MAX: 255 (0xff)
CHARSET_LENGTH: 24
SALT_HASH_SIZE: 1048576
Max. Markov mode level: 400
Max. Markov mode password length: 30
gcc version: 5.3.0
Crypto library: OpenSSL
OpenSSL library version: 01000207f
OpenSSL 1.0.2g 1 Mar 2016
GMP library version: 6.1.0
File locking: NOT supported by this build - do not run concurrent sessions!
fseek(): fseeko64
ftell(): ftello64
fopen(): fopen64
memmem(): JtR internal
```
### Other information
Seems **NOT** related to:
- #2083
- #2190
****
* [1] I built JtR on someone else's HP Windows 7 notebook and I put JtR in a Windows CI service available online. I built and tested on i686 and x86_64.
* [2] 0xC0000374: STATUS_HEAP_CORRUPTION or 0xC0000005: ""memory access violation"".
****
BTW: printf definitions, e.g. ""LLu"", have problems on 32bits.",1,jtr mingw builds seem to have memory corruptions it is possible to crack using a native windows build of jtr but it is not possible to test it it always fails with heap corruption or bad memory access steps to reproduce build on windows and run jtr using test or test full options real cracking works at least i wasn t able to trigger an error workaround build jtr with memdbg enabled succeed even a test full system configuration win john list build info version jumbo build bit ac simd interleaving john is format interface version max number of reported tunable costs rec file version charset file version charset min charset max charset length salt hash size max markov mode level max markov mode password length gcc version crypto library openssl openssl library version openssl mar gmp library version file locking not supported by this build do not run concurrent sessions fseek ftell fopen memmem jtr internal other information seems not related to i built jtr on someone s else hp windows notebook and i put jtr in a windows ci service available online i built and tested on and status heap corruption or memory access violation btw printf definitions e g llu have problems on ,1
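One concrete portability pitfall the report's closing remark touches on is the ""LLu"" printf length modifier on 32-bit MinGW: the legacy msvcrt printf historically did not understand %llu. A self-contained illustration of the usual workaround (not taken from the JtR sources) is:
```c
/* Editorial sketch: portable 64-bit printf on 32-bit MinGW.
 * __USE_MINGW_ANSI_STDIO selects MinGW's C99-conformant printf (the legacy
 * msvcrt one may not understand %llu); PRIu64 from <inttypes.h> expands to
 * the right length modifier either way. Harmless on other compilers. */
#define __USE_MINGW_ANSI_STDIO 1
#include <inttypes.h>
#include <stdio.h>

int main(void) {
	uint64_t count = 18446744073709551615ULL; /* UINT64_MAX */
	printf("count = %" PRIu64 "\n", count);
	return 0;
}
```
Whether this relates to the heap corruption itself is unclear; it only addresses the ""LLu"" side note.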
523,7374251975.0,IssuesEvent,2018-03-13 19:44:53,nasa/multipath-tcp-tools,https://api.github.com/repos/nasa/multipath-tcp-tools,closed,Compilation issues,bug portability,"I had to do the following change through the project to make it compile:
```
- tcp_header_length = ((tcp_header->th_off & 0xf0 >> 4) * 4);
+ tcp_header_length = ((tcp_header->doff & 0xf0 >> 4) * 4);
```
It's probably because on your system you have ```__FAVOR_BSD``` set, because ```netinet/tcp.h``` does:
```
# ifdef __FAVOR_BSD
[...]
```
I think you could probably fix that issue by defining ```__FAVOR_BSD``` in the Makefile.",True,"Compilation issues - I had to do the following change through the project to make it compile:
```
- tcp_header_length = ((tcp_header->th_off & 0xf0 >> 4) * 4);
+ tcp_header_length = ((tcp_header->doff & 0xf0 >> 4) * 4);
```
It's probably because on your system you have ```__FAVOR_BSD``` set, because ```netinet/tcp.h``` does:
```
# ifdef __FAVOR_BSD
[...]
```
I think you could probably fix that issue by defining ```__FAVOR_BSD``` in the Makefile.",1,compilation issues i had to do the following change through the project to make it compile tcp header length tcp header th off tcp header length tcp header doff it s probably because on your system you have favor bsd set because netinet tcp h does ifdef favor bsd i think you could probably fix that issue by defining favor bsd in the makefile ,1
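The `th_off`/`doff` mismatch above comes from glibc exposing two `struct tcphdr` layouts: traditionally the BSD-style field names are only visible when `__FAVOR_BSD` is defined before including `netinet/tcp.h`. A hypothetical accessor (the helper name is invented; this is not the project's actual patch) that compiles against either layout:
```c
/* Editorial sketch: read the TCP data offset whichever struct tcphdr layout
 * <netinet/tcp.h> exposed. Older glibc provides the BSD-style names
 * (th_off, ...) only when __FAVOR_BSD is defined before the include;
 * otherwise the Linux-style names (doff, ...) are used. */
#include <netinet/tcp.h>
#include <stddef.h>

static inline size_t tcp_header_length_bytes(const struct tcphdr *tcp) {
#ifdef __FAVOR_BSD
	return (size_t)tcp->th_off * 4; /* BSD-style bitfield, counts 32-bit words */
#else
	return (size_t)tcp->doff * 4;   /* Linux-style bitfield, counts 32-bit words */
#endif
}
```
Defining `__FAVOR_BSD` from the Makefile, as suggested above, would be the other way to keep the original `th_off` code building unchanged.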
724445,24931089373.0,IssuesEvent,2022-10-31 11:41:10,hoffstadt/DearPyGui,https://api.github.com/repos/hoffstadt/DearPyGui,closed,"Window background color not working if ""modal=True""",priority: low state: pending type: bug,"## Version of Dear PyGui
Version: 1.40
Operating System: Win11
## My Issue/Question
If a window is set to be modal, the binded theme to set background color is not working.
## To Reproduce
Steps to reproduce the behavior:
1. Run the minimal example
2. Change modal=False
3. Run the minimal example again
## Expected behavior
Run the minimal example; the background color of the About window is not black.
Change modal to False in the window settings.
Run again, and the background color is now black.
## Standalone, minimal, complete and verifiable example
```python
import dearpygui.dearpygui as dpg
dpg.create_context()
dpg.create_viewport()
dpg.setup_dearpygui()
with dpg.theme() as themeWinBgBlack:
with dpg.theme_component(dpg.mvAll):
dpg.add_theme_color(dpg.mvThemeCol_WindowBg, [0,0,0])
with dpg.window(label=""About"", modal=True) as winMenuAbout:
dpg.add_text(""test"")
dpg.bind_item_theme(winMenuAbout, themeWinBgBlack)
dpg.show_viewport()
dpg.start_dearpygui()
dpg.destroy_context()
```
",1.0,"Window background color not working if ""modal=True"" - ## Version of Dear PyGui
Version: 1.40
Operating System: Win11
## My Issue/Question
If a window is set to be modal, the binded theme to set background color is not working.
## To Reproduce
Steps to reproduce the behavior:
1. Run the minimal example
2. Change modal=False
3. Run the minimal example again
## Expected behavior
Run the minimal example; the background color of the About window is not black.
Change modal to False in the window settings.
Run again, and the background color is now black.
## Standalone, minimal, complete and verifiable example
```python
import dearpygui.dearpygui as dpg
dpg.create_context()
dpg.create_viewport()
dpg.setup_dearpygui()
with dpg.theme() as themeWinBgBlack:
with dpg.theme_component(dpg.mvAll):
dpg.add_theme_color(dpg.mvThemeCol_WindowBg, [0,0,0])
with dpg.window(label=""About"", modal=True) as winMenuAbout:
dpg.add_text(""test"")
dpg.bind_item_theme(winMenuAbout, themeWinBgBlack)
dpg.show_viewport()
dpg.start_dearpygui()
dpg.destroy_context()
```
",0,window background color not working if modal true version of dear pygui version operating system my issue question if a window is set to be modal the binded theme to set background color is not working to reproduce steps to reproduce the behavior run the minimal example change modal false run the minimal example again expected behavior run the minimal example the background color of about windows is not black change modal to false in the windows setting run again and background color is now black standalone minimal complete and verifiable example python import dearpygui dearpygui as dpg dpg create context dpg create viewport dpg setup dearpygui with dpg theme as themewinbgblack with dpg theme component dpg mvall dpg add theme color dpg mvthemecol windowbg with dpg window label about modal true as winmenuabout dpg add text test dpg bind item theme winmenuabout themewinbgblack dpg show viewport dpg start dearpygui dpg destroy context ,0
83898,3644692607.0,IssuesEvent,2016-02-15 11:06:01,MinetestForFun/server-minetestforfun-skyblock,https://api.github.com/repos/MinetestForFun/server-minetestforfun-skyblock,closed,Protector bug,Modding ➤ BugFix Priority: High,"This only applies to the normal protector logo, not the blocks or the 3x protectors, just the normal protector logos(protector:protect2).
Logos currently **cannot** be removed. When attempting to dig one, an ""Unknown Object"" texture appears next to it and the logo remains.
I copied this error from the server log right after I found this bug:
2016-02-12 21:41:41: ERROR[ServerThread]: LuaEntity name ""protector:display"" not defined
:small_orange_diamond:",1.0,"Protector bug - This only applies to the normal protector logo, not the blocks or the 3x protectors, just the normal protector logos(protector:protect2).
Logos currently **cannot** be removed. When attempting to dig one, an ""Unknown Object"" texture appears next to it and the logo remains.
I copied this error from the server log right after I found this bug:
2016-02-12 21:41:41: ERROR[ServerThread]: LuaEntity name ""protector:display"" not defined
:small_orange_diamond:",0,protector bug this only applies to the normal protector logo not the blocks or the protectors just the normal protector logos protector logos currently cannot be removed when attempting to dig them an unknown object texture appears next to it and the logo remains i copied this error from the server log right after i found this bug error luaentity name protector display not defined small orange diamond ,0
232562,7661533622.0,IssuesEvent,2018-05-11 14:32:06,diwg/cf2,https://api.github.com/repos/diwg/cf2,closed,Suggestions for Types proposal,Medium Priority Types,"Daniel,
Here are my initial suggestions for the Types proposal. If you find they have merit, please craft the replacement language yourself as I am short on time this week:
1. List NC_UBYTE among the new atomic types
2. This statement
> enum types may be used, but only if they resolve to an atomic external type at the end.
needs elaboration. I think that the netCDF4 library ensures that the base type of an enum must be an integer. If so, then the statement is redundant, since all integer types are valid base types. What does ""at the end"" add to the sentence? Since the base type must be an integer, there can be no recursive ENUM variables (yes?).
3. It's unclear why you narrow the proposal down to just adding ENUM, not VLEN, COMPOUND, or OPAQUE. This means the Types proposal is in effect a proposal to allow ENUM and STRING types (so should the name be changed?), because the integer types are covered by my CF1 proposal which languishes in limbo but will probably be adopted eventually (and does anyone recall another CF1 proposal to allow STRING?) . Please add rationale and/or explanation why the proposal is narrow, since adoptees will likely want to understand this proposal in the context of previous and potential future proposals related to types.",1.0,"Suggestions for Types proposal - Daniel,
Here are my initial suggestions for the Types proposal. If you find they have merit, please craft the replacement language yourself as I am short on time this week:
1. List NC_UBYTE among the new atomic types
2. This statement
> enum types may be used, but only if they resolve to an atomic external type at the end.
needs elaboration. I think that the netCDF4 library ensures that the base type of an enum must be an integer. If so, then the statement is redundant, since all integer types are valid base types. What does ""at the end"" add to the sentence? Since the base type must be an integer, there can be no recursive ENUM variables (yes?).
3. It's unclear why you narrow the proposal down to just adding ENUM, not VLEN, COMPOUND, or OPAQUE. This means the Types proposal is in effect a proposal to allow ENUM and STRING types (so should the name be changed?), because the integer types are covered by my CF1 proposal which languishes in limbo but will probably be adopted eventually (and does anyone recall another CF1 proposal to allow STRING?) . Please add rationale and/or explanation why the proposal is narrow, since adoptees will likely want to understand this proposal in the context of previous and potential future proposals related to types.",0,suggestions for types proposal daniel here are my initial suggestions for the types proposal if you find they have merit please craft the replacement language yourself as i am short on time this week list nc ubyte among the new atomic types this statement enum types may be used but only if they resolve to an atomic external type at the end needs elaboration i think that library ensures that the base type of an enum must be an integer if so then the statement is redundant since all integer types are valid base types what does at the end add to the sentence since the base type must be an integer there can be no recursive enum variables yes it s unclear why you narrow the proposal down to just adding enum not vlen compound or opaque this means the types proposal is in effect a proposal to allow enum and string types so should the name be changed because the integer types are covered by my proposal which languishes in limbo but will probably be adopted eventually and does anyone recall another proposal to allow string please add rationale and or explanation why the proposal is narrow since adoptees will likely want to understand this proposal in the context of previous and potential future proposals related to types ,0
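As background for point 2 above, a minimal netCDF-C sketch (file, type, and variable names are invented; error checking omitted for brevity) shows that an ENUM type is declared directly over an integer atomic base type, which is why it always resolves to an integer external type on disk:
```c
/* Editorial sketch: a netCDF-4 ENUM type over the NC_UBYTE base type. */
#include <netcdf.h>

int main(void) {
	int ncid, varid, dimid;
	nc_type cloud_t;
	unsigned char clear = 0, cloudy = 1;

	nc_create("enum_example.nc", NC_NETCDF4, &ncid);
	/* The base type handed to nc_def_enum must be an integer atomic type. */
	nc_def_enum(ncid, NC_UBYTE, "cloud_flag_t", &cloud_t);
	nc_insert_enum(ncid, cloud_t, "clear", &clear);
	nc_insert_enum(ncid, cloud_t, "cloudy", &cloudy);
	nc_def_dim(ncid, "time", 4, &dimid);
	nc_def_var(ncid, "cloud_flag", cloud_t, 1, &dimid, &varid);
	nc_close(ncid);
	return 0;
}
```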
716,9634809787.0,IssuesEvent,2019-05-15 22:23:20,Azure/azure-functions-host,https://api.github.com/repos/Azure/azure-functions-host,closed,Shutdown functions host if there is no language worker,Supportability,"If starting language worker fails including retries, need to shutdown functions host. This is required to avoid invocation failures on a functions host without language worker.",True,"Shutdown functions host if there is no language worker - If starting language worker fails including retries, need to shutdown functions host. This is required to avoid invocation failures on a functions host without language worker.",1,shutdown functions host if there is no language worker if starting language worker fails including retries need to shutdown functions host this is required to avoid invocation failures on a functions host without language worker ,1
1989,32009241107.0,IssuesEvent,2023-09-21 16:47:36,thorvg/thorvg,https://api.github.com/repos/thorvg/thorvg,closed,cross compile on mcu,Ideas invalid portability,"I would like to port thorvg to my microcontroller board, which features a Cortex-M33 core and runs FreeRTOS. I've noticed that Thorvg relies on C++14, which isn't very friendly for microcontroller platforms due to its usage of certain C++ features like concurrency-related APIs. Do you happen to know if there are any plans to decouple these codes and transition to an operating system adaptation layer instead?",True,"cross compile on mcu - I would like to port thorvg to my microcontroller board, which features a Cortex-M33 core and runs FreeRTOS. I've noticed that Thorvg relies on C++14, which isn't very friendly for microcontroller platforms due to its usage of certain C++ features like concurrency-related APIs. Do you happen to know if there are any plans to decouple these codes and transition to an operating system adaptation layer instead?",1,cross compile on mcu i would like to port thorvg to my microcontroller board which features a cortex core and runs freertos i ve noticed that thorvg relies on c which isn t very friendly for microcontroller platforms due to its usage of certain c features like concurrency related apis do you happen to know if there are any plans to decouple these codes and transition to an operating system adaptation layer instead ,1
1799,26532591268.0,IssuesEvent,2023-01-19 13:35:13,microsoft/vscode,https://api.github.com/repos/microsoft/vscode,closed,Doing `code ` from integrated terminal in portable mode uses wrong user data,bug terminal portable-mode author-verification-requested,"Does this issue occur when all extensions are disabled?: No
- VS Code Version: 1.74.2
- OS Version: Arch with Linux v6.1.3
Steps to Reproduce:
1. Open VS Code in portable mode (VSCODE_PORTABLE env var set to a directory)
2. Open a file in the instance by typing `code ` in the integrated terminal.
3. VS Code opens a new window rather than opening the file as a tab, and the new window reads userdata from `~/.vscode` instead of ""$VSCODE_PORTABLE"".
Possible solution:
I investigated and noticed that the integrated terminal in VS Code reads the `VSCODE_PORTABLE` env var as unset even if it is set by the OS. As a workaround, I have added this to my config, and it works:
```
""terminal.integrated.env.linux"": { ""VSCODE_PORTABLE"": ""${env:VSCODE_PORTABLE}"" },
```
",True,"Doing `code ` from integrated terminal in portable mode uses wrong user data - Does this issue occur when all extensions are disabled?: No
- VS Code Version: 1.74.2
- OS Version: Arch with Linux v6.1.3
Steps to Reproduce:
1. Open VS Code in portable mode (VSCODE_PORTABLE env var set to a directory)
2. Open a file in the instance by typing `code ` in the integrated terminal.
3. VS Code opens a new window rather than opening the file as a tab, and the new window reads userdata from `~/.vscode` instead of ""$VSCODE_PORTABLE"".
Possible solution:
I investigated and noticed that the integrated terminal in VS Code reads the `VSCODE_PORTABLE` env var as unset even if it is set by the OS. As a workaround, I have added this to my config, and it works:
```
""terminal.integrated.env.linux"": { ""VSCODE_PORTABLE"": ""${env:VSCODE_PORTABLE}"" },
```
",1,doing code from integrated terminal in portable mode uses wrong user data does this issue occur when all extensions are disabled no report issue dialog can assist with this vs code version os version arch with linux steps to reproduce open vs code in portable mode vscode portable env var set to a directory open a file in the instance by typing code in the integrated terminal vs code opens a new window rather than opening the file as a tab and the new window reads userdata from vscode instead of vscode portable possible solution i investigated and noticed that the integrated terminal in vs code reads the vscode portable env var as unset even if it is set by the os as a workaround i have added this to my config and it works terminal integrated env linux vscode portable env vscode portable ,1
49524,12369550692.0,IssuesEvent,2020-05-18 15:25:07,tensorflow/tensorflow,https://api.github.com/repos/tensorflow/tensorflow,opened,Tensorflow containers are missing from Docker Hub,type:build/install,"Your docker pages point at
https://hub.docker.com/r/tensorflow/tensorflow
Today that returns ""404"" Oops! Page not found.",1.0,"Tensorflow containers are missing from Docker Hub - Your docker pages point at
https://hub.docker.com/r/tensorflow/tensorflow
Today that returns ""404"" Oops! Page not found.",0,tensorflow containers are missing from docker hub your docker pages point at today that returns oops page not found ,0
726074,24987140682.0,IssuesEvent,2022-11-02 15:50:31,bcgov/entity,https://api.github.com/repos/bcgov/entity,closed, Message Error when Registering a DBA / Oct 24 2022/ SP GP Business Registry,bug Priority1 ENTITY,"A product team ticket to resolve this Ops ticket - https://app.zenhub.com/workspaces/ops-60f8556e05d25b0011468870/issues/bcgov-registries/ops-support/1583
""I have had two citizens call regarding registering a DBA. They are getting a “no matches found” message when trying to put the corporation in. I’ve asked them to try the registration and business numbers as well as the name, nothing works.""",1.0," Message Error when Registering a DBA / Oct 24 2022/ SP GP Business Registry - A product team ticket to resolve this Ops ticket - https://app.zenhub.com/workspaces/ops-60f8556e05d25b0011468870/issues/bcgov-registries/ops-support/1583
""I have had two citizens call regarding registering a DBA. They are getting a “no matches found” message when trying to put the corporation in. I’ve asked them to try the registration and business numbers as well as the name, nothing works.""",0, message error when registering a dba oct sp gp business registry a product team ticket to resolved this ops ticket i have had two citizens call regarding registering a dba they are getting a “no matches found” message when trying to put the corporation in i’ve asked them to try the registration and business numbers as well as the name nothing works ,0
893,11790687605.0,IssuesEvent,2020-03-17 19:28:10,MicrosoftDocs/sql-docs,https://api.github.com/repos/MicrosoftDocs/sql-docs,closed,Sort Order on Non-Key columns in Index,Pri1 assigned-to-author doc-bug sql/prod supportability/tech,"Under ""Index Sort Order Design Guidelines"" section it was mentioned that SORT order for indexes can be created only on key columns. However, in the Adventure Works example provided, sort order was mentioned on ""RejectedQty"" which is not a key column in the table. Please check this and let me know if I am missing something.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 20a94499-0b07-2deb-46d6-102e6f1dedac
* Version Independent ID: 324ed645-ed52-6104-9ffd-2907ff315565
* Content: [SQL Server Index Architecture and Design Guide - SQL Server](https://docs.microsoft.com/en-us/sql/relational-databases/sql-server-index-design-guide?view=sql-server-ver15#General_Design)
* Content Source: [docs/relational-databases/sql-server-index-design-guide.md](https://github.com/MicrosoftDocs/sql-docs/blob/live/docs/relational-databases/sql-server-index-design-guide.md)
* Product: **sql**
* Technology: **supportability**
* GitHub Login: @rothja
* Microsoft Alias: **jroth**",True,"Sort Order on Non-Key columns in Index - Under ""Index Sort Order Design Guidelines"" section it was mentioned that SORT order for indexes can be created only on key columns. However, in the Adventure Works example provided, sort order was mentioned on ""RejectedQty"" which is not a key column in the table. Please check this and let me know if I am missing something.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 20a94499-0b07-2deb-46d6-102e6f1dedac
* Version Independent ID: 324ed645-ed52-6104-9ffd-2907ff315565
* Content: [SQL Server Index Architecture and Design Guide - SQL Server](https://docs.microsoft.com/en-us/sql/relational-databases/sql-server-index-design-guide?view=sql-server-ver15#General_Design)
* Content Source: [docs/relational-databases/sql-server-index-design-guide.md](https://github.com/MicrosoftDocs/sql-docs/blob/live/docs/relational-databases/sql-server-index-design-guide.md)
* Product: **sql**
* Technology: **supportability**
* GitHub Login: @rothja
* Microsoft Alias: **jroth**",1,sort order on non key columns in index under index sort order design guidelines section it was mentioned that sort order for indexes can be created only on key columns however in the adventure works example provided sort order was mentioned on rejectedqty which is not a key column in the table please check this and let me know if i am missing something document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product sql technology supportability github login rothja microsoft alias jroth ,1
1282,17147184919.0,IssuesEvent,2021-07-13 15:47:07,ocaml/opam,https://api.github.com/repos/ocaml/opam,closed,Can't upgrade to Opam 2.0 on Windows 10/Cygwin64,AREA: PORTABILITY,"I've been trying various ways to upgrade from 1.2.2 to 2.0 following the instructions at https://opam.ocaml.org/blog/opam-2-0-0/ without success. They all seem to fail:
1. `sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)` says to download the Opam source, then run `make cold`, which dies after a few thousand lines of output with:
```
'/cygdrive/c/opam/opam-2.0.3/bootstrap/ocaml/bin/ocamldep.opt' -modules src/which_program/which_program.boot.ml > boot-depends.txt
'/cygdrive/c/opam/opam-2.0.3/bootstrap/ocaml/bin/ocamldep.opt' -modules src/xdg/xdg.ml > boot-depends.txt
'/cygdrive/c/opam/opam-2.0.3/bootstrap/ocaml/bin/ocamllex.opt' -q src/let-syntax/lexer.mll
'/cygdrive/c/opam/opam-2.0.3/bootstrap/ocaml/bin/ocamlc.opt' -g -w -40 -o boot.exe unix.cma boot.ml
1 [main] ocamlrun 16520 child_info_fork::abort: address space needed by 'dllunix.so' (0x400000) is already occupied
Error: fork: : Resource temporarily unavailable
```
2. Similarly `opam update; opam install opam-devel` dies while installing jbuilder:
```
#=== ERROR while installing jbuilder.1.0+beta20 ===============================#
# opam-version 1.2.2
# os cygwin
# command ./boot.exe -j 4
# path /home/Jim/.opam/4.06.1/build/jbuilder.1.0+beta20
# compiler 4.06.1
# exit-code 1
# env-file /home/Jim/.opam/4.06.1/build/jbuilder.1.0+beta20/jbuilder-5052-e0259d.env
# stdout-file /home/Jim/.opam/4.06.1/build/jbuilder.1.0+beta20/jbuilder-5052-e0259d.out
# stderr-file /home/Jim/.opam/4.06.1/build/jbuilder.1.0+beta20/jbuilder-5052-e0259d.err
### stderr ###
# 1 [main] ocamlrun 5124 child_info_fork::abort: address space needed by 'dllunix.so' (0x420000) is already occupied
# Error: fork: : Resource temporarily unavailable
```
Rebooting doesn't help.
This issue appears similar to https://github.com/ocaml/opam/issues/3503, but I'm able to compile many, many files with OCaml.
I saw this suggestion in https://github.com/ocaml/opam/issues/2276 for Cygwin32. Perhaps there is an equivalent for Cygwin64?
```
rebase -b 0x7cd20000 ./.opam/4.02.3/lib/ocaml/stublibs/dllunix.so
rebase -b 0x7cdc0000 ./.opam/4.02.3/lib/ocaml/stublibs/dllthreads.so
```
Should I update to the latest cygwin? Timestamps suggest I installed it on 11/24/17.
```
$ opam config report
# OPAM config report
# opam-version 1.2.2
# self-upgrade no
# os cygwin
# external-solver no
# criteria -removed,-notuptodate,-changed
# jobs 4
# repositories 1 (http)
# pinned 0
# current-switch 4.06.1
# last-update 2019-03-16 07:58
```
",True,"Can't upgrade to Opam 2.0 on Windows 10/Cygwin64 - I've been trying various ways to upgrade from 1.2.2 to 2.0 following the instructions at https://opam.ocaml.org/blog/opam-2-0-0/ without success. They all seem to fail:
1. `sh <(curl -sL https://raw.githubusercontent.com/ocaml/opam/master/shell/install.sh)` says to download the Opam source, then run `make cold`, which dies after a few thousand lines of output with:
```
'/cygdrive/c/opam/opam-2.0.3/bootstrap/ocaml/bin/ocamldep.opt' -modules src/which_program/which_program.boot.ml > boot-depends.txt
'/cygdrive/c/opam/opam-2.0.3/bootstrap/ocaml/bin/ocamldep.opt' -modules src/xdg/xdg.ml > boot-depends.txt
'/cygdrive/c/opam/opam-2.0.3/bootstrap/ocaml/bin/ocamllex.opt' -q src/let-syntax/lexer.mll
'/cygdrive/c/opam/opam-2.0.3/bootstrap/ocaml/bin/ocamlc.opt' -g -w -40 -o boot.exe unix.cma boot.ml
1 [main] ocamlrun 16520 child_info_fork::abort: address space needed by 'dllunix.so' (0x400000) is already occupied
Error: fork: : Resource temporarily unavailable
```
2. Similarly `opam update; opam install opam-devel` dies while installing jbuilder:
```
#=== ERROR while installing jbuilder.1.0+beta20 ===============================#
# opam-version 1.2.2
# os cygwin
# command ./boot.exe -j 4
# path /home/Jim/.opam/4.06.1/build/jbuilder.1.0+beta20
# compiler 4.06.1
# exit-code 1
# env-file /home/Jim/.opam/4.06.1/build/jbuilder.1.0+beta20/jbuilder-5052-e0259d.env
# stdout-file /home/Jim/.opam/4.06.1/build/jbuilder.1.0+beta20/jbuilder-5052-e0259d.out
# stderr-file /home/Jim/.opam/4.06.1/build/jbuilder.1.0+beta20/jbuilder-5052-e0259d.err
### stderr ###
# 1 [main] ocamlrun 5124 child_info_fork::abort: address space needed by 'dllunix.so' (0x420000) is already occupied
# Error: fork: : Resource temporarily unavailable
```
Rebooting doesn't help.
This issue appears similar to https://github.com/ocaml/opam/issues/3503, but I'm able to compile many, many files with OCaml.
I saw this suggestion in https://github.com/ocaml/opam/issues/2276 for Cygwin32. Perhaps there is an equivalent for Cygwin64?
```
rebase -b 0x7cd20000 ./.opam/4.02.3/lib/ocaml/stublibs/dllunix.so
rebase -b 0x7cdc0000 ./.opam/4.02.3/lib/ocaml/stublibs/dllthreads.so
```
Should I update to the latest cygwin? Timestamps suggest I installed it on 11/24/17.
```
$ opam config report
# OPAM config report
# opam-version 1.2.2
# self-upgrade no
# os cygwin
# external-solver no
# criteria -removed,-notuptodate,-changed
# jobs 4
# repositories 1 (http)
# pinned 0
# current-switch 4.06.1
# last-update 2019-03-16 07:58
```
",1,can t upgrade to opam on windows i ve been trying various ways to upgrade from to following the instructions at without success they all seem to fail sh curl sl says to download the opam source then run make cold which dies after a few thousand lines of output with cygdrive c opam opam bootstrap ocaml bin ocamldep opt modules src which program which program boot ml boot depends txt cygdrive c opam opam bootstrap ocaml bin ocamldep opt modules src xdg xdg ml boot depends txt cygdrive c opam opam bootstrap ocaml bin ocamllex opt q src let syntax lexer mll cygdrive c opam opam bootstrap ocaml bin ocamlc opt g w o boot exe unix cma boot ml ocamlrun child info fork abort address space needed by dllunix so is already occupied error fork resource temporarily unavailable similarly opam update opam install opam devel dies while installing jbuilder error while installing jbuilder opam version os cygwin command boot exe j path home jim opam build jbuilder compiler exit code env file home jim opam build jbuilder jbuilder env stdout file home jim opam build jbuilder jbuilder out stderr file home jim opam build jbuilder jbuilder err stderr ocamlrun child info fork abort address space needed by dllunix so is already occupied error fork resource temporarily unavailable rebooting doesn t help this issue appears similar to but i m able to compile many many files with ocaml i saw this suggestion in for perhaps there an equivalent for rebase b opam lib ocaml stublibs dllunix so rebase b opam lib ocaml stublibs dllthreads so should i update to the latest cygwin timestamps suggest i installed in on opam config report opam config report opam version self upgrade no os cygwin external solver no criteria removed notuptodate changed jobs repositories http pinned current switch last update ,1
1361,19512302688.0,IssuesEvent,2021-12-29 01:50:42,lkrg-org/lkrg,https://api.github.com/repos/lkrg-org/lkrg,closed,Build on RHEL7 broken,portability,"Build on RHEL7 is broken in two ways:
1. ff5ca9630057e2c1d3856a8674afac9bf6a9b1d3 apparently requires GCC 7+, and RHEL7 by default has older.
2. c6d93ae9df2519a66a6d5ec9e226f8458c51815c assumes the kernel has `READ_ONCE`, but older kernels did not. They had `ACCESS_ONCE` instead: https://lwn.net/Articles/624126/
I'm testing fixes for these.",True,"Build on RHEL7 broken - Build on RHEL7 is broken in two ways:
1. ff5ca9630057e2c1d3856a8674afac9bf6a9b1d3 apparently requires GCC 7+, and RHEL7 by default has older.
2. c6d93ae9df2519a66a6d5ec9e226f8458c51815c assumes the kernel has `READ_ONCE`, but older kernels did not. They had `ACCESS_ONCE` instead: https://lwn.net/Articles/624126/
I'm testing fixes for these.",1,build on broken build on is broken in two ways apparently requires gcc and by default has older assumes the kernel has read once but older kernels did not they had access once instead i m testing fixes for these ,1
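The RHEL7 issue above notes that older kernels provide `ACCESS_ONCE` where newer ones have `READ_ONCE`, but it does not show the fix. Below is a minimal, self-contained sketch of one common compatibility approach, defining `READ_ONCE` in terms of `ACCESS_ONCE` when it is missing; this is an illustration of the general technique under that assumption, not the actual LKRG patch, and `ACCESS_ONCE` is mocked so the snippet compiles outside a kernel tree.
```cpp
// Hypothetical compatibility shim for kernels that predate READ_ONCE() (a sketch,
// not the LKRG commit). In real kernel code ACCESS_ONCE() comes from the kernel's
// compiler headers; it is mocked here so the example builds as a user-space program.
#include <cstdio>

#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))  // GNU extension, as in older kernel headers

#ifndef READ_ONCE
#define READ_ONCE(x) ACCESS_ONCE(x)  // fall back to the older primitive when READ_ONCE is absent
#endif

int main() {
    int flag = 42;
    std::printf("READ_ONCE(flag) = %d\n", READ_ONCE(flag));
    return 0;
}
```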
183386,21721724193.0,IssuesEvent,2022-05-11 01:20:15,raindigi/reaction,https://api.github.com/repos/raindigi/reaction,closed,CVE-2021-29060 (Medium) detected in color-string-1.5.3.tgz - autoclosed,security vulnerability,"## CVE-2021-29060 - Medium Severity Vulnerability
Vulnerable Library - color-string-1.5.3.tgz
Parser and generator for CSS color strings
Library home page: https://registry.npmjs.org/color-string/-/color-string-1.5.3.tgz
Path to dependency file: /package.json
Path to vulnerable library: /node_modules/color-string/package.json
Dependency Hierarchy:
- sharp-0.20.5.tgz (Root Library)
- color-3.1.0.tgz
- :x: **color-string-1.5.3.tgz** (Vulnerable Library)
Found in HEAD commit: c3f5e6b9d647cd1f977b184ae9c079f1ae297353
Vulnerability Details
A Regular Expression Denial of Service (ReDOS) vulnerability was discovered in Color-String version 1.5.5 and below which occurs when the application is provided and checks a crafted invalid HWB string.
Publish Date: 2021-06-21
URL: CVE-2021-29060
CVSS 3 Score Details (5.3 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://github.com/advisories/GHSA-257v-vj4p-3w2h
Release Date: 2021-06-21
Fix Resolution: color-string - 1.5.5
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2021-29060 (Medium) detected in color-string-1.5.3.tgz - autoclosed - ## CVE-2021-29060 - Medium Severity Vulnerability
Vulnerable Library - color-string-1.5.3.tgz
Parser and generator for CSS color strings
Library home page: https://registry.npmjs.org/color-string/-/color-string-1.5.3.tgz
Path to dependency file: /package.json
Path to vulnerable library: /node_modules/color-string/package.json
Dependency Hierarchy:
- sharp-0.20.5.tgz (Root Library)
- color-3.1.0.tgz
- :x: **color-string-1.5.3.tgz** (Vulnerable Library)
Found in HEAD commit: c3f5e6b9d647cd1f977b184ae9c079f1ae297353
Vulnerability Details
A Regular Expression Denial of Service (ReDOS) vulnerability was discovered in Color-String version 1.5.5 and below which occurs when the application is provided and checks a crafted invalid HWB string.
Publish Date: 2021-06-21
URL: CVE-2021-29060
CVSS 3 Score Details (5.3 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://github.com/advisories/GHSA-257v-vj4p-3w2h
Release Date: 2021-06-21
Fix Resolution: color-string - 1.5.5
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in color string tgz autoclosed cve medium severity vulnerability vulnerable library color string tgz parser and generator for css color strings library home page a href path to dependency file package json path to vulnerable library node modules color string package json dependency hierarchy sharp tgz root library color tgz x color string tgz vulnerable library found in head commit a href vulnerability details a regular expression denial of service redos vulnerability was discovered in color string version and below which occurs when the application is provided and checks a crafted invalid hwb string publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution color string step up your open source security game with whitesource ,0
1608,23245070915.0,IssuesEvent,2022-08-03 19:17:05,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Outdated process,azure-supportability/svc triaged assigned-to-author doc-bug Pri2,"The process in the Azure portal is different from what is described here.
There is now a wizard-like thing for requesting increases for different vcpu quotas, and it seems total regional vcpu is automatically increased as well.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: e19f68c5-3917-c131-a379-3b9e3156593b
* Version Independent ID: 89f78479-edd5-f96c-b342-31e43ef72c92
* Content: [Request an increase in Azure regional vCPU quota limits](https://docs.microsoft.com/en-us/azure/azure-portal/supportability/regional-quota-requests#feedback)
* Content Source: [articles/azure-portal/supportability/regional-quota-requests.md](https://github.com/Microsoft/azure-docs/blob/master/articles/azure-portal/supportability/regional-quota-requests.md)
* Service: **azure-supportability**
* GitHub Login: @sowmyavenkat86
* Microsoft Alias: **svenkat**",True,"Outdated process - The process in the Azure portal is different from what is described here.
There is now a wizard-like thing for requesting increases for different vcpu quotas, and it seems total regional vcpu is automatically increased as well.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: e19f68c5-3917-c131-a379-3b9e3156593b
* Version Independent ID: 89f78479-edd5-f96c-b342-31e43ef72c92
* Content: [Request an increase in Azure regional vCPU quota limits](https://docs.microsoft.com/en-us/azure/azure-portal/supportability/regional-quota-requests#feedback)
* Content Source: [articles/azure-portal/supportability/regional-quota-requests.md](https://github.com/Microsoft/azure-docs/blob/master/articles/azure-portal/supportability/regional-quota-requests.md)
* Service: **azure-supportability**
* GitHub Login: @sowmyavenkat86
* Microsoft Alias: **svenkat**",1,outdated process the process is azure portal is different from what is described here there is now a wizard like thing for requesting increases for different vcpu quotas and it seems total regional vcpu is automatically increased as well document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service azure supportability github login microsoft alias svenkat ,1
847,10904705011.0,IssuesEvent,2019-11-20 09:18:20,ToFuProject/tofu,https://api.github.com/repos/ToFuProject/tofu,closed,[CI] conda_build step is taking a long time,portability,"Since the last update of `conda-build`, the time to build the library went up to 18 mins.",True,"[CI] conda_build step is taking a long time - Since the last update of `conda-build`, the time to build the library went up to 18 mins.",1, conda build step is taking a long time since last update of conda build the time of the building library went up to mins ,1
1843,27260787820.0,IssuesEvent,2023-02-22 14:46:03,alcionai/corso,https://api.github.com/repos/alcionai/corso,closed,return graph error details when failing to fetch OneDrive item permissions,bug onedrive supportability,Details about the reason a call to fetch item permissions fails are not currently returned. This makes debugging difficult.,True,return graph error details when failing to fetch OneDrive item permissions - Details about the reason a call to fetch item permissions fails are not currently returned. This makes debugging difficult.,1,return graph error details when failing to fetch onedrive item permissions details about the reason a call to fetch item permissions are not currently returned this makes debugging difficult,1
1605,23245027128.0,IssuesEvent,2022-08-03 19:14:29,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Wrong link for Azure subscription and service limits,azure-supportability/svc triaged assigned-to-author doc-enhancement Pri2,"The link for 'Azure subscription and service limits' links to this page: https://docs.microsoft.com/en-us/azure/azure-portal/supportability/classic-deployment-model-quota-increase-requests
Instead of https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: e19f68c5-3917-c131-a379-3b9e3156593b
* Version Independent ID: 89f78479-edd5-f96c-b342-31e43ef72c92
* Content: [Request an increase in Azure regional vCPU quota limits](https://docs.microsoft.com/en-us/azure/azure-portal/supportability/regional-quota-requests)
* Content Source: [articles/azure-portal/supportability/regional-quota-requests.md](https://github.com/Microsoft/azure-docs/blob/master/articles/azure-portal/supportability/regional-quota-requests.md)
* Service: **azure-supportability**
* GitHub Login: @sowmyavenkat86
* Microsoft Alias: **svenkat**",True,"Wrong link for Azure subscription and service limits - The link for 'Azure subscription and service limits' links to this page: https://docs.microsoft.com/en-us/azure/azure-portal/supportability/classic-deployment-model-quota-increase-requests
Instead of https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: e19f68c5-3917-c131-a379-3b9e3156593b
* Version Independent ID: 89f78479-edd5-f96c-b342-31e43ef72c92
* Content: [Request an increase in Azure regional vCPU quota limits](https://docs.microsoft.com/en-us/azure/azure-portal/supportability/regional-quota-requests)
* Content Source: [articles/azure-portal/supportability/regional-quota-requests.md](https://github.com/Microsoft/azure-docs/blob/master/articles/azure-portal/supportability/regional-quota-requests.md)
* Service: **azure-supportability**
* GitHub Login: @sowmyavenkat86
* Microsoft Alias: **svenkat**",1,wrong link for azure subscription and service limits the link for azure subscription and service limits links to this page instead of document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service azure supportability github login microsoft alias svenkat ,1
191912,22215867121.0,IssuesEvent,2022-06-08 01:31:44,Nivaskumark/kernel_v4.1.15,https://api.github.com/repos/Nivaskumark/kernel_v4.1.15,reopened,CVE-2021-28660 (High) detected in linux-stable-rtv4.1.33,security vulnerability,"## CVE-2021-28660 - High Severity Vulnerability
Vulnerable Library - linux-stable-rtv4.1.33
Julia Cartwright's fork of linux-stable-rt.git
Library home page: https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git
Found in HEAD commit: 00db4e8795bcbec692fb60b19160bdd763ad42e3
Found in base branch: master
Vulnerable Source Files (1)
/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
Vulnerability Details
rtw_wx_set_scan in drivers/staging/rtl8188eu/os_dep/ioctl_linux.c in the Linux kernel through 5.11.6 allows writing beyond the end of the ->ssid[] array. NOTE: from the perspective of kernel.org releases, CVE IDs are not normally used for drivers/staging/* (unfinished work); however, system integrators may have situations in which a drivers/staging issue is relevant to their own customer base.
Publish Date: 2021-03-17
URL: CVE-2021-28660
CVSS 3 Score Details (7.8 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://www.linuxkernelcves.com/cves/CVE-2021-28660
Release Date: 2021-03-17
Fix Resolution: v4.4.262,v4.9.262,v4.14.226,v4.19.181,v5.4.106,v5.10.24,v5.11.7,v5.12-rc3
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2021-28660 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2021-28660 - High Severity Vulnerability
Vulnerable Library - linux-stable-rtv4.1.33
Julia Cartwright's fork of linux-stable-rt.git
Library home page: https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git
Found in HEAD commit: 00db4e8795bcbec692fb60b19160bdd763ad42e3
Found in base branch: master
Vulnerable Source Files (1)
/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
Vulnerability Details
rtw_wx_set_scan in drivers/staging/rtl8188eu/os_dep/ioctl_linux.c in the Linux kernel through 5.11.6 allows writing beyond the end of the ->ssid[] array. NOTE: from the perspective of kernel.org releases, CVE IDs are not normally used for drivers/staging/* (unfinished work); however, system integrators may have situations in which a drivers/staging issue is relevant to their own customer base.
Publish Date: 2021-03-17
URL: CVE-2021-28660
CVSS 3 Score Details (7.8 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://www.linuxkernelcves.com/cves/CVE-2021-28660
Release Date: 2021-03-17
Fix Resolution: v4.4.262,v4.9.262,v4.14.226,v4.19.181,v5.4.106,v5.10.24,v5.11.7,v5.12-rc3
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files drivers staging os dep ioctl linux c vulnerability details rtw wx set scan in drivers staging os dep ioctl linux c in the linux kernel through allows writing beyond the end of the ssid array note from the perspective of kernel org releases cve ids are not normally used for drivers staging unfinished work however system integrators may have situations in which a drivers staging issue is relevant to their own customer base publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
1114,14268204075.0,IssuesEvent,2020-11-20 21:58:04,IBM/FHIR,https://api.github.com/repos/IBM/FHIR,closed,Automate Push to Docker Certification Registry,cloud portability,Automate Push to Docker Certification Registry for IBM FHIR Server and IBM FHIR Server Schema Tool,True,Automate Push to Docker Certification Registry - Automate Push to Docker Certification Registry for IBM FHIR Server and IBM FHIR Server Schema Tool,1,automate push to docker certification registry automate push to docker certification registry for ibm fhir server and ibm fhir server schema tool,1
319,5867499824.0,IssuesEvent,2017-05-14 01:22:55,gvansickle/ucg,https://api.github.com/repos/gvansickle/ucg,closed,Compile error with clang++ 4.0.0 vs. DirTree,portability,"clang++ 4.0.0 doesn't like the *_basename_filter_type's in DirTree.h:
```
In file included from ../../../src/libext/DirTree.cpp:22:
../../../src/libext/DirTree.h:143:28: error: implicit instantiation of undefined template 'std::function<bool (const std::basic_string<char> &) noexcept>'
file_basename_filter_type m_file_basename_filter;
^
```
",True,"Compile error with clang++ 4.0.0 vs. DirTree - clang++ 4.0.0 doesn't like the *_basename_filter_type's in DirTree.h:
```
In file included from ../../../src/libext/DirTree.cpp:22:
../../../src/libext/DirTree.h:143:28: error: implicit instantiation of undefined template 'std::function<bool (const std::basic_string<char> &) noexcept>'
file_basename_filter_type m_file_basename_filter;
^
```
",1,compile error with clang vs dirtree clang doesn t like the basename filter type s in dirtree h in file included from src libext dirtree cpp src libext dirtree h error implicit instantiation of undefined template std function bool const std basic string noexcept file basename filter type m file basename filter ,1
434197,12515418458.0,IssuesEvent,2020-06-03 07:38:31,zeebe-io/zeebe,https://api.github.com/repos/zeebe-io/zeebe,closed,Client: Improve settings documentation,Scope: broker Status: Needs Priority Type: Maintenance,"The javadoc for a client setting should not only describe the direct technical effect, but also why it exists and why it may be relevant to change it (e.g. topic subscription capacity enables graceful handling of backpressure scenarios).",1.0,"Client: Improve settings documentation - The javadoc for a client setting should not only describe the direct technical effect, but also why it exists and why it may be relevant to change it (e.g. topic subscription capacity enables graceful handling of backpressure scenarios).",0,client improve settings documentation the javadoc for a client setting should not only describe the direct technical effect but also why it exists and why it may be relevant to change it e g topic subscription capacity enables graceful handling of backpressure scenarios ,0
433048,30308663004.0,IssuesEvent,2023-07-10 11:17:20,hwchase17/langchain,https://api.github.com/repos/hwchase17/langchain,closed,DOC: Bug in loading Chroma from disk (vectorstores/integrations/chroma),auto:bug auto:documentation,"### Issue with current documentation:
https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/chroma.html#basic-example-including-saving-to-disk
## Environment
- macOS
- Python 3.10.9
- langchain 0.0.228
- chromadb 0.3.26
Use https://github.com/hwchase17/langchain/blob/v0.0.228/docs/extras/modules/state_of_the_union.txt
## Procedure
1. Run the following Python script
ref: https://github.com/hwchase17/langchain/blob/v0.0.228/docs/extras/modules/data_connection/vectorstores/integrations/chroma.ipynb
```diff
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
# load the document and split it into chunks
loader = TextLoader(""../../../state_of_the_union.txt"")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name=""all-MiniLM-L6-v2"")
# load it into Chroma
db = Chroma.from_documents(docs, embedding_function)
# query it
query = ""What did the president say about Ketanji Brown Jackson""
docs = db.similarity_search(query)
# print results
print(docs[0].page_content)
# save to disk
db2 = Chroma.from_documents(docs, embedding_function, persist_directory=""./chroma_db"")
db2.persist()
-docs = db.similarity_search(query)
+docs = db2.similarity_search(query)
# load from disk
db3 = Chroma(persist_directory=""./chroma_db"")
-docs = db.similarity_search(query)
+docs = db3.similarity_search(query) # ValueError raised
print(docs[0].page_content)
```
## Expected behavior
`print(docs[0].page_content)` with db3
## Actual behavior
>ValueError: You must provide embeddings or a function to compute them
```
Traceback (most recent call last):
File ""/.../issue_report.py"", line 35, in
docs = db3.similarity_search(query)
File ""/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py"", line 174, in similarity_search
docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
File ""/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py"", line 242, in similarity_search_with_score
results = self.__query_collection(
File ""/.../venv/lib/python3.10/site-packages/langchain/utils.py"", line 55, in wrapper
return func(*args, **kwargs)
File ""/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py"", line 121, in __query_collection
return self._collection.query(
File ""/.../venv/lib/python3.10/site-packages/chromadb/api/models/Collection.py"", line 209, in query
raise ValueError(
ValueError: You must provide embeddings or a function to compute them
```
### Idea or request for content:
Fixed by specifying the `embedding_function` parameter.
```diff
-db3 = Chroma(persist_directory=""./chroma_db"")
+db3 = Chroma(persist_directory=""./chroma_db"", embedding_function=embedding_function)
docs = db3.similarity_search(query)
print(docs[0].page_content)
```
(Added) ref: https://github.com/hwchase17/langchain/blob/v0.0.228/langchain/vectorstores/chroma.py#L62",1.0,"DOC: Bug in loading Chroma from disk (vectorstores/integrations/chroma) - ### Issue with current documentation:
https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/chroma.html#basic-example-including-saving-to-disk
## Environment
- macOS
- Python 3.10.9
- langchain 0.0.228
- chromadb 0.3.26
Use https://github.com/hwchase17/langchain/blob/v0.0.228/docs/extras/modules/state_of_the_union.txt
## Procedure
1. Run the following Python script
ref: https://github.com/hwchase17/langchain/blob/v0.0.228/docs/extras/modules/data_connection/vectorstores/integrations/chroma.ipynb
```diff
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
# load the document and split it into chunks
loader = TextLoader(""../../../state_of_the_union.txt"")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name=""all-MiniLM-L6-v2"")
# load it into Chroma
db = Chroma.from_documents(docs, embedding_function)
# query it
query = ""What did the president say about Ketanji Brown Jackson""
docs = db.similarity_search(query)
# print results
print(docs[0].page_content)
# save to disk
db2 = Chroma.from_documents(docs, embedding_function, persist_directory=""./chroma_db"")
db2.persist()
-docs = db.similarity_search(query)
+docs = db2.similarity_search(query)
# load from disk
db3 = Chroma(persist_directory=""./chroma_db"")
-docs = db.similarity_search(query)
+docs = db3.similarity_search(query) # ValueError raised
print(docs[0].page_content)
```
## Expected behavior
`print(docs[0].page_content)` with db3
## Actual behavior
>ValueError: You must provide embeddings or a function to compute them
```
Traceback (most recent call last):
File ""/.../issue_report.py"", line 35, in
docs = db3.similarity_search(query)
File ""/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py"", line 174, in similarity_search
docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
File ""/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py"", line 242, in similarity_search_with_score
results = self.__query_collection(
File ""/.../venv/lib/python3.10/site-packages/langchain/utils.py"", line 55, in wrapper
return func(*args, **kwargs)
File ""/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py"", line 121, in __query_collection
return self._collection.query(
File ""/.../venv/lib/python3.10/site-packages/chromadb/api/models/Collection.py"", line 209, in query
raise ValueError(
ValueError: You must provide embeddings or a function to compute them
```
### Idea or request for content:
Fixed by specifying the `embedding_function` parameter.
```diff
-db3 = Chroma(persist_directory=""./chroma_db"")
+db3 = Chroma(persist_directory=""./chroma_db"", embedding_function=embedding_function)
docs = db3.similarity_search(query)
print(docs[0].page_content)
```
(Added) ref: https://github.com/hwchase17/langchain/blob/v0.0.228/langchain/vectorstores/chroma.py#L62",0,doc bug in loading chroma from disk vectorstores integrations chroma issue with current documentation environment macos python langchain chromadb use procedure run the following python script ref diff import from langchain embeddings sentence transformer import sentencetransformerembeddings from langchain text splitter import charactertextsplitter from langchain vectorstores import chroma from langchain document loaders import textloader load the document and split it into chunks loader textloader state of the union txt documents loader load split it into chunks text splitter charactertextsplitter chunk size chunk overlap docs text splitter split documents documents create the open source embedding function embedding function sentencetransformerembeddings model name all minilm load it into chroma db chroma from documents docs embedding function query it query what did the president say about ketanji brown jackson docs db similarity search query print results print docs page content save to disk chroma from documents docs embedding function persist directory chroma db persist docs db similarity search query docs similarity search query load from disk chroma persist directory chroma db docs db similarity search query docs similarity search query valueerror raised print docs page content expected behavior print docs page content with actual behavior valueerror you must provide embeddings or a function to compute them traceback most recent call last file issue report py line in docs similarity search query file venv lib site packages langchain vectorstores chroma py line in similarity search docs and scores self similarity search with score query k filter filter file venv lib site packages langchain vectorstores chroma py line in similarity search with score results self query collection file venv lib site packages langchain utils py line in wrapper return func args kwargs file venv lib site packages langchain vectorstores chroma py line in query collection return self collection query file venv lib site packages chromadb api models collection py line in query raise valueerror valueerror you must provide embeddings or a function to compute them idea or request for content fixed by specifying the embedding function parameter diff chroma persist directory chroma db chroma persist directory chroma db embedding function embedding function docs similarity search query print docs page content added ref ,0
532,7529236386.0,IssuesEvent,2018-04-14 02:08:43,chapel-lang/chapel,https://api.github.com/repos/chapel-lang/chapel,closed,Chapel program execution hangs on Titan,area: BTR area: Runtime area: Third-Party type: Bug type: Portability user issue,"
### Summary of Problem
I installed and built the latest version of Chapel (1.17.0) on Titan Cray XK7 at ORNL and the execution stalls until timeout (5 minutes) when attempting to run a simple ""hello world"" Chapel program over 4 nodes. The prebuilt Chapel module (1.10.0) works and I also installed Chapel 1.10.0 which works. However, all attempted later versions of Chapel (1.12.0, 1.13.0, 1.15.0, and 1.17.0) cause the execution to hang.
### Steps to Reproduce
**Source Code:**
config const numTasks = here.maxTaskPar;
forall tid in 0..#numTasks do
writeln(""Hello from task "" + tid);
**Compile command:**
chpl --fast -o hello hello.chpl
**Execution command:**
aprun -cc none -d16 -n4 -N1 -j0 ./hello_real -nl4 --numTasks=16 --verbose
**Associated Future Test(s):**
### Configuration Information
- Output of `chpl --version`:
1.17.0, 1.15.0 1.13.0 and also 1.12.0 (using dlmalloc instead of jemalloc)
- Output of `$CHPL_HOME/util/printchplenv --anonymize`:
machine info: Linux titan-ext4 3.0.101-0.47.106.11-default #1 SMP Tue Jan 2 10:46:28 UTC 2018 (afabf7c) x86_64
CHPL_HOME: /lustre/atlas/scratch/user/project/framework/chapel/chapel-1.17.0 *
script location: /lustre/atlas2/project/scratch/user/framework/chapel/chapel-1.17.0/util
CHPL_TARGET_PLATFORM: cray-xe
CHPL_TARGET_COMPILER: cray-prgenv-pgi
CHPL_TARGET_ARCH: interlagos
CHPL_LOCALE_MODEL: flat
CHPL_COMM: gasnet *
CHPL_COMM_SUBSTRATE: gemini
CHPL_GASNET_SEGMENT: fast
CHPL_TASKS: qthreads
CHPL_LAUNCHER: aprun
CHPL_TIMERS: generic
CHPL_UNWIND: none
CHPL_MEM: jemalloc
CHPL_ATOMICS: locks
CHPL_NETWORK_ATOMICS: none
CHPL_GMP: system
CHPL_HWLOC: hwloc
CHPL_REGEXP: none
CHPL_AUX_FILESYS: none
- Back-end compiler and version, e.g. `gcc --version` or `clang --version`:
gcc (SUSE Linux) 4.3.4 [gcc-4_3-branch revision 152973]
- (For Cray systems only) Output of `module list`:
Currently Loaded Modulefiles:
1) eswrap/1.3.3-1.020200.1280.0
2) craype-network-gemini
3) pgi/17.9.0
4) craype/2.5.13
5) cray-libsci/16.11.1
6) udreg/2.3.2-1.0502.10518.2.17.gem
7) ugni/6.0-1.0502.10863.8.28.gem
8) pmi/5.0.12
9) dmapp/7.0.1-1.0502.11080.8.74.gem
10) gni-headers/4.0-1.0502.10859.7.8.gem
11) xpmem/0.1-2.0502.64982.5.3.gem
12) dvs/2.5_0.9.0-1.0502.2188.1.113.gem
13) alps/5.2.4-2.0502.9774.31.12.gem
14) rca/1.0.0-2.0502.60530.1.63.gem
15) atp/2.1.1
16) PrgEnv-pgi/5.2.82
17) cray-mpich/7.6.3
18) craype-interlagos
19) lustredu/1.4
20) xalt/0.7.5
21) git/2.13.0
22) module_msg/0.1
23) modulator/1.2.0
24) hsi/5.0.2.p1
25) DefApps
",True,"Chapel program execution hangs on Titan -
### Summary of Problem
I installed and built the latest version of Chapel (1.17.0) on Titan Cray XK7 at ORNL and the execution stalls until timeout (5 minutes) when attempting to run a simple ""hello world"" Chapel program over 4 nodes. The prebuilt Chapel module (1.10.0) works and I also installed Chapel 1.10.0 which works. However, all attempted later versions of Chapel (1.12.0, 1.13.0, 1.15.0, and 1.17.0) cause the execution to hang.
### Steps to Reproduce
**Source Code:**
config const numTasks = here.maxTaskPar;
forall tid in 0..#numTasks do
writeln(""Hello from task "" + tid);
**Compile command:**
chpl --fast -o hello hello.chpl
**Execution command:**
aprun -cc none -d16 -n4 -N1 -j0 ./hello_real -nl4 --numTasks=16 --verbose
**Associated Future Test(s):**
### Configuration Information
- Output of `chpl --version`:
1.17.0, 1.15.0 1.13.0 and also 1.12.0 (using dlmalloc instead of jemalloc)
- Output of `$CHPL_HOME/util/printchplenv --anonymize`:
machine info: Linux titan-ext4 3.0.101-0.47.106.11-default #1 SMP Tue Jan 2 10:46:28 UTC 2018 (afabf7c) x86_64
CHPL_HOME: /lustre/atlas/scratch/user/project/framework/chapel/chapel-1.17.0 *
script location: /lustre/atlas2/project/scratch/user/framework/chapel/chapel-1.17.0/util
CHPL_TARGET_PLATFORM: cray-xe
CHPL_TARGET_COMPILER: cray-prgenv-pgi
CHPL_TARGET_ARCH: interlagos
CHPL_LOCALE_MODEL: flat
CHPL_COMM: gasnet *
CHPL_COMM_SUBSTRATE: gemini
CHPL_GASNET_SEGMENT: fast
CHPL_TASKS: qthreads
CHPL_LAUNCHER: aprun
CHPL_TIMERS: generic
CHPL_UNWIND: none
CHPL_MEM: jemalloc
CHPL_ATOMICS: locks
CHPL_NETWORK_ATOMICS: none
CHPL_GMP: system
CHPL_HWLOC: hwloc
CHPL_REGEXP: none
CHPL_AUX_FILESYS: none
- Back-end compiler and version, e.g. `gcc --version` or `clang --version`:
gcc (SUSE Linux) 4.3.4 [gcc-4_3-branch revision 152973]
- (For Cray systems only) Output of `module list`:
Currently Loaded Modulefiles:
1) eswrap/1.3.3-1.020200.1280.0
2) craype-network-gemini
3) pgi/17.9.0
4) craype/2.5.13
5) cray-libsci/16.11.1
6) udreg/2.3.2-1.0502.10518.2.17.gem
7) ugni/6.0-1.0502.10863.8.28.gem
8) pmi/5.0.12
9) dmapp/7.0.1-1.0502.11080.8.74.gem
10) gni-headers/4.0-1.0502.10859.7.8.gem
11) xpmem/0.1-2.0502.64982.5.3.gem
12) dvs/2.5_0.9.0-1.0502.2188.1.113.gem
13) alps/5.2.4-2.0502.9774.31.12.gem
14) rca/1.0.0-2.0502.60530.1.63.gem
15) atp/2.1.1
16) PrgEnv-pgi/5.2.82
17) cray-mpich/7.6.3
18) craype-interlagos
19) lustredu/1.4
20) xalt/0.7.5
21) git/2.13.0
22) module_msg/0.1
23) modulator/1.2.0
24) hsi/5.0.2.p1
25) DefApps
",1,chapel program execution hangs on titan if you are filing an issue that is not a bug report please feel free to erase this template and describe the issue as clearly as possible summary of problem what behavior did you observe when encountering this issue what behavior did you expect to observe is this a blocking issue with no known work arounds i installed and built the latest version of chapel on titan cray at ornl and the execution stalls until timeout minutes when attempting to run a simple hello world chapel program over nodes the prebuilt chapel module works and i also installed chapel which works howver all attempted later versions of chapel and cause the execution to hang steps to reproduce source code config const numtasks here maxtaskpar forall tid in numtasks do writeln hello from task tid compile command chpl fast o hello hello chpl execution command e g foo nl if an input file is required include it as well aprun cc none hello real numtasks verbose associated future test s are there any tests in chapel s test system that demonstrate this issue e g configuration information output of chpl version and also using dlmalloc instead of jemalloc output of chpl home util printchplenv anonymize machine info linux titan default smp tue jan utc chpl home lustre atlas scratch user project framework chapel chapel script location lustre project scratch user framework chapel chapel util chpl target platform cray xe chpl target compiler cray prgenv pgi chpl target arch interlagos chpl locale model flat chpl comm gasnet chpl comm substrate gemini chpl gasnet segment fast chpl tasks qthreads chpl launcher aprun chpl timers generic chpl unwind none chpl mem jemalloc chpl atomics locks chpl network atomics none chpl gmp system chpl hwloc hwloc chpl regexp none chpl aux filesys none back end compiler and version e g gcc version or clang version gcc suse linux for cray systems only output of module list currently loaded modulefiles eswrap craype network gemini pgi craype cray libsci udreg gem ugni gem pmi dmapp gem gni headers gem xpmem gem dvs gem alps gem rca gem atp prgenv pgi cray mpich craype interlagos lustredu xalt git module msg modulator hsi defapps ,1
1620,23347612308.0,IssuesEvent,2022-08-09 19:36:15,chapel-lang/chapel,https://api.github.com/repos/chapel-lang/chapel,closed,imprecise specification of 'env' used in slurm launch command,area: Runtime user issue type: Portability easy / straightforward,"Just a minor nit, but it caused me some heartburn recently so....
When running on a cluster I had trouble running a chapel program:
(I elided the -E arguments)
```
%hello -nl 2 -v
salloc --quiet -J CHPL-hello -N 2 --ntasks-per-node=1 --exclusive /opt/chapel-1.27.0/third-party/gasnet/install/hpe-apollo-x86-skylake-avx512-llvm-none/substrate-ibv/seg-large/bin/gasnetrun_ibv -n 2 -N 2 -c 9 -E '...' env LANG=en_US.utf8 LC_ALL= LC_COLLATE= /home/user/hello_real -nl 2 -v
:1: error: Unexpected flag: ""LANG-en_US.utf8""
:1: error: Unexpected flag: ""LANG-en_US.utf8""
----------------------------------------------------------------
Primary job terminated normally, but 1 process return
a non-zero exit code. Per user-direction, the job has been aborted.
----------------------------------------------------------------
----------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminate. The first process to do so was:
.
.
.
```
It eventually turned out that the problem was that I happened to have a program called 'env' in my path and that was getting called instead of the system version. (env happened to be a chapel program and didn't really give me much of a clue that it was the thing that was unhappy).
Anyway, boneheaded move on my part, but maybe chapel should consider being more specific about the 'env' that it wants to use.
",True,"imprecise specification of 'env' used in slurm launch command - Just a minor nit, but it caused me some heartburn recently so....
When running on a cluster I had trouble running a chapel program:
(I elided the -E arguments)
```
%hello -nl 2 -v
salloc --quiet -J CHPL-hello -N 2 --ntasks-per-node=1 --exclusive /opt/chapel-1.27.0/third-party/gasnet/install/hpe-apollo-x86-skylake-avx512-llvm-none/substrate-ibv/seg-large/bin/gasnetrun_ibv -n 2 -N 2 -c 9 -E '...' env LANG=en_US.utf8 LC_ALL= LC_COLLATE= /home/user/hello_real -nl 2 -v
:1: error: Unexpected flag: ""LANG-en_US.utf8""
:1: error: Unexpected flag: ""LANG-en_US.utf8""
----------------------------------------------------------------
Primary job terminated normally, but 1 process return
a non-zero exit code. Per user-direction, the job has been aborted.
----------------------------------------------------------------
----------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminate. The first process to do so was:
.
.
.
```
It eventually turned out that the problem was that I happened to have a program called 'env' in my path and that was getting called instead of the system version. (env happened to be a chapel program and didn't really give me much of a clue that it was the thing that was unhappy).
Anyway, boneheaded move on my part, but maybe chapel should consider being more specific about the 'env' that it wants to use.
",1,imprecise specification of env used in slurm launch command just a minor nit but it caused me some heartburn recently so when running on a cluster i had trouble running a chapel program i elided the e arguments hello nl v salloc quiet j chpl hello n ntasks per node exclusive opt chapel third party gasnet install hpe apollo skylake llvm none substrate ibv seg large bin gasnetrun ibv n n c e env lang en us lc all lc collate home user hello real nl v error unexpected flag lang en us error unexpected flag lang en us primary job terminated normally but process return a non zero exit code per user direction the job has been aborted mpirun detected that one or more processes exited with non zero status thus causing the job to be terminate the first process to do so was it eventually turned out that the problem was that i happened to have a program called env in my path and that was getting called instead of the system version env happened to be a chapel program and didn t really give me much of a clue that it was the thing that was unhappy anyway boneheaded move on my part but maybe chapel should consider being more specific about the env that it wants to use ,1
168923,13107866001.0,IssuesEvent,2020-08-04 15:53:57,cockroachdb/cockroach,https://api.github.com/repos/cockroachdb/cockroach,closed,roachtest: sqlsmith/setup=seed/setting=no-ddl failed,C-test-failure O-roachtest O-robot branch-provisional_202008031850_v20.2.0-alpha.3 release-blocker,"[(roachtest).sqlsmith/setup=seed/setting=no-ddl failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2143445&tab=buildLog) on [provisional_202008031850_v20.2.0-alpha.3@b0ec3c9834d59be51e7af8a070bbef8b20f2d62a](https://github.com/cockroachdb/cockroach/commits/b0ec3c9834d59be51e7af8a070bbef8b20f2d62a):
```
INSERT
INTO
defaultdb.public.seed AS tab_2421
WITH
with_798 (col_6280) AS (SELECT * FROM (VALUES ((-1.569702273301134):::FLOAT8)) AS tab_2422 (col_6280))
SELECT
4917:::INT8 AS col_6281,
(
SELECT
(-319347211):::INT8 AS col_6282
FROM
defaultdb.public.seed@seed__int8__float8__date_idx AS tab_2423, defaultdb.public.seed@[0] AS tab_2424
WHERE
false
ORDER BY
tab_2424._float8 ASC, tab_2424._float4 ASC, tab_2423._timestamptz
LIMIT
1:::INT8
)
AS col_6283,
crdb_internal.destroy_tenant((-3469236868022851591):::INT8::INT8)::INT8 AS col_6284,
cte_ref_218.col_6280 AS col_6285,
var_pop(cte_ref_218.col_6280::FLOAT8) OVER (PARTITION BY cte_ref_218.col_6280 RANGE CURRENT ROW)::FLOAT8
AS col_6286,
now():::DATE::DATE AS col_6287,
current_timestamp((-5367689394412957846):::INT8::INT8):::TIMESTAMP::TIMESTAMP AS col_6288,
experimental_follower_read_timestamp()::TIMESTAMPTZ AS col_6289,
date_trunc(e'dSwj\rx^p*':::STRING::STRING, '11:11:22.724072':::TIME::TIME)::INTERVAL AS col_6290,
st_covers('010500000009000000010200000002000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F010200000004000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F010200000005000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F010200000002000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F010200000002000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F010200000004000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F010200000002000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F010200000006000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F010200000004000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F':::GEOMETRY::GEOMETRY, '0101000000000000000000F07F000000000000F07F':::GEOMETRY::GEOMETRY)::BOOL
AS col_6291,
exp((-1):::DECIMAL::DECIMAL)::DECIMAL AS col_6292,
overlay('K !cc':::STRING::STRING, NULL::STRING, NULL::INT8)::STRING AS col_6293,
crdb_internal.encode_key(NULL::INT8, NULL::INT8, B'0111111111111111111111111111111111111111111111111111111111111111')::BYTES
AS col_6294,
gen_random_uuid()::UUID AS col_6295,
netmask(broadcast(NULL::INET)::INET::INET)::INET AS col_6296,
json_agg(cte_ref_218.col_6280) OVER (PARTITION BY cte_ref_218.col_6280 RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)::JSONB
AS col_6297,
'hello':::greeting AS col_6298,
pow((-7.92966261239411176E+31):::DECIMAL::DECIMAL, (-90438879243.33829799):::DECIMAL::DECIMAL)::DECIMAL AS col_6299
FROM
with_798 AS cte_ref_218
WHERE
true
GROUP BY
cte_ref_218.col_6280
HAVING
st_covers('X':::STRING::STRING, st_ashexewkb('010200000004000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F':::GEOMETRY::GEOMETRY)::STRING::STRING)::BOOL
ORDER BY
cte_ref_218.col_6280 DESC, cte_ref_218.col_6280 DESC;
```
More
Artifacts: [/sqlsmith/setup=seed/setting=no-ddl](https://teamcity.cockroachdb.com/viewLog.html?buildId=2143445&tab=artifacts#/sqlsmith/setup=seed/setting=no-ddl)
Related:
- #51313 roachtest: sqlsmith/setup=seed/setting=no-ddl failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202007081918_v20.2.0-alpha.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202007081918_v20.2.0-alpha.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #48269 roachtest: sqlsmith/setup=seed/setting=no-ddl failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dseed%2Fsetting%3Dno-ddl.%2A&sort=title&restgroup=false&display=lastcommented+project)
powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
",2.0,"roachtest: sqlsmith/setup=seed/setting=no-ddl failed - [(roachtest).sqlsmith/setup=seed/setting=no-ddl failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2143445&tab=buildLog) on [provisional_202008031850_v20.2.0-alpha.3@b0ec3c9834d59be51e7af8a070bbef8b20f2d62a](https://github.com/cockroachdb/cockroach/commits/b0ec3c9834d59be51e7af8a070bbef8b20f2d62a):
```
INSERT
INTO
defaultdb.public.seed AS tab_2421
WITH
with_798 (col_6280) AS (SELECT * FROM (VALUES ((-1.569702273301134):::FLOAT8)) AS tab_2422 (col_6280))
SELECT
4917:::INT8 AS col_6281,
(
SELECT
(-319347211):::INT8 AS col_6282
FROM
defaultdb.public.seed@seed__int8__float8__date_idx AS tab_2423, defaultdb.public.seed@[0] AS tab_2424
WHERE
false
ORDER BY
tab_2424._float8 ASC, tab_2424._float4 ASC, tab_2423._timestamptz
LIMIT
1:::INT8
)
AS col_6283,
crdb_internal.destroy_tenant((-3469236868022851591):::INT8::INT8)::INT8 AS col_6284,
cte_ref_218.col_6280 AS col_6285,
var_pop(cte_ref_218.col_6280::FLOAT8) OVER (PARTITION BY cte_ref_218.col_6280 RANGE CURRENT ROW)::FLOAT8
AS col_6286,
now():::DATE::DATE AS col_6287,
current_timestamp((-5367689394412957846):::INT8::INT8):::TIMESTAMP::TIMESTAMP AS col_6288,
experimental_follower_read_timestamp()::TIMESTAMPTZ AS col_6289,
date_trunc(e'dSwj\rx^p*':::STRING::STRING, '11:11:22.724072':::TIME::TIME)::INTERVAL AS col_6290,
st_covers('010500000009000000010200000002000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F010200000004000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F010200000005000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F010200000002000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F010200000002000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F010200000004000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F010200000002000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F010200000006000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F010200000004000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F':::GEOMETRY::GEOMETRY, '0101000000000000000000F07F000000000000F07F':::GEOMETRY::GEOMETRY)::BOOL
AS col_6291,
exp((-1):::DECIMAL::DECIMAL)::DECIMAL AS col_6292,
overlay('K !cc':::STRING::STRING, NULL::STRING, NULL::INT8)::STRING AS col_6293,
crdb_internal.encode_key(NULL::INT8, NULL::INT8, B'0111111111111111111111111111111111111111111111111111111111111111')::BYTES
AS col_6294,
gen_random_uuid()::UUID AS col_6295,
netmask(broadcast(NULL::INET)::INET::INET)::INET AS col_6296,
json_agg(cte_ref_218.col_6280) OVER (PARTITION BY cte_ref_218.col_6280 RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)::JSONB
AS col_6297,
'hello':::greeting AS col_6298,
pow((-7.92966261239411176E+31):::DECIMAL::DECIMAL, (-90438879243.33829799):::DECIMAL::DECIMAL)::DECIMAL AS col_6299
FROM
with_798 AS cte_ref_218
WHERE
true
GROUP BY
cte_ref_218.col_6280
HAVING
st_covers('X':::STRING::STRING, st_ashexewkb('010200000004000000000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F000000000000F07F':::GEOMETRY::GEOMETRY)::STRING::STRING)::BOOL
ORDER BY
cte_ref_218.col_6280 DESC, cte_ref_218.col_6280 DESC;
```
More
Artifacts: [/sqlsmith/setup=seed/setting=no-ddl](https://teamcity.cockroachdb.com/viewLog.html?buildId=2143445&tab=artifacts#/sqlsmith/setup=seed/setting=no-ddl)
Related:
- #51313 roachtest: sqlsmith/setup=seed/setting=no-ddl failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-provisional_202007081918_v20.2.0-alpha.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-provisional_202007081918_v20.2.0-alpha.2) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
- #48269 roachtest: sqlsmith/setup=seed/setting=no-ddl failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asqlsmith%2Fsetup%3Dseed%2Fsetting%3Dno-ddl.%2A&sort=title&restgroup=false&display=lastcommented+project)
powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
",0,roachtest sqlsmith setup seed setting no ddl failed on insert into defaultdb public seed as tab with with col as select from values as tab col select as col select as col from defaultdb public seed seed date idx as tab defaultdb public seed as tab where false order by tab asc tab asc tab timestamptz limit as col crdb internal destroy tenant as col cte ref col as col var pop cte ref col over partition by cte ref col range current row as col now date date as col current timestamp timestamp timestamp as col experimental follower read timestamp timestamptz as col date trunc e dswj rx p string string time time interval as col st covers geometry geometry geometry geometry bool as col exp decimal decimal decimal as col overlay k cc string string null string null string as col crdb internal encode key null null b bytes as col gen random uuid uuid as col netmask broadcast null inet inet inet inet as col json agg cte ref col over partition by cte ref col range between unbounded preceding and unbounded following jsonb as col hello greeting as col pow decimal decimal decimal decimal decimal as col from with as cte ref where true group by cte ref col having st covers x string string st ashexewkb geometry geometry string string bool order by cte ref col desc cte ref col desc more artifacts related roachtest sqlsmith setup seed setting no ddl failed roachtest sqlsmith setup seed setting no ddl failed powered by ,0
506157,14659484591.0,IssuesEvent,2020-12-28 20:37:20,ita-social-projects/horondi_client_fe,https://api.github.com/repos/ita-social-projects/horondi_client_fe,opened,[Категорії page] UI doesn't correspond to the mockup in the light theme,UI bug priority: medium,"**Environment:** Windows 10 Pro, Google Chrome, version 86.0.4240.183.
**Reproducible:** always.
**Build found:** commit 79f5645
**Preconditions:**
Go to https://horondi-admin-staging.azurewebsites.net
Log into Administrator page as Administrator
**Steps to reproduce:**
Go to 'Категорії' menu item
Check the UI of the page
**Actual result:**
**Expected result:**
",1.0,"[Категорії page] UI doesn't correspond to the mockup in the light theme - **Environment:** Windows 10 Pro, Google Chrome, version 86.0.4240.183.
**Reproducible:** always.
**Build found:** commit 79f5645
**Preconditions:**
Go to https://horondi-admin-staging.azurewebsites.net
Log into Administrator page as Administrator
**Steps to reproduce:**
Go to 'Категорії' menu item
Check the UI of the page
**Actual result:**
**Expected result:**
",0, ui doesn t correspond to the mockup in the light theme environment windows pro google chrome version reproducible always build found commit preconditions go to log into administrator page as administrator steps to reproduce go to категорії menu item check the ui of the page actual result img width alt category main page ui defects src expected result img width alt category page src ,0
621,8390559962.0,IssuesEvent,2018-10-09 12:58:37,systemd/systemd,https://api.github.com/repos/systemd/systemd,closed,"protabled: don't insist on "".raw"" suffix for raw images so strictly",bug 🐛 has-pr ✨ portable,"**systemd version the issue has been seen with**
> 239
**Used distribution**
> Arch
**Expected behaviour you didn't see**
> I have a squashfs image `image.img`. If I mount that at `/tmp/image`, then `sudo /lib/systemd/portablectl inspect /tmp/image` reports metadata fine. I expected `sudo /lib/systemd/portablectl inspect image.img` to work as well.
**Unexpected behaviour you saw**
> The command printed `Failed to inspect image metadata: Wrong medium type`. `portablectl attach` also prints the `Wrong medium type` message.
**Steps to reproduce the problem**
First build a squashfs image that reproduces the problem.
Save the following to `repro.nix`.
```nix
with (import (fetchTarball {
url = ""https://github.com/NixOS/nixpkgs/archive/56b9f6fc8e1c3a4ad10ff7c61e461d7b7e038833.tar.gz"";
sha256 = ""0v5y4wjfxbappsaibw88h7n1flcx7kpvg51mjv3h9m4aa3fn2c8q"";
}) {});
let
imageDir = stdenv.mkDerivation {
name = ""repro-filesystem"";
buildCommand = ''
mkdir -p $out/usr/lib
mkdir -p $out/usr/lib/systemd/system
echo ""PORTABLE_PRETTY_NAME=repro"" > $out/usr/lib/os-release
touch $out/usr/lib/systemd/system/repro.service
'';
};
in
stdenv.mkDerivation {
name = ""repro.img"";
nativeBuildInputs = [ squashfsTools ];
buildInputs = [ imageDir ];
buildCommand =
''
mksquashfs ${imageDir} $out \
-no-fragments \
-processors 1 \
-all-root \
-b 1048576 \
-comp xz \
-Xdict-size 100% \
'';
}
```
Run [`nix build -f repro.nix --out-link repro.img`](https://nixos.org/nix/).
Then:
```
$ file $(realpath repro.img)
/nix/store/cildzy70a6jgk5cq1d3mnbm7bg2q2wxh-repro.img: Squashfs filesystem, little endian, version 4.0, 493 bytes, 7 inodes, blocksize: 1048576 bytes, created: Thu Jan 1 00:00:01 1970
$ mkdir /tmp/repro
$ sudo mount repro.img /tmp/repro
$ sudo /usr/lib/systemd/portablectl inspect /tmp/repro
(Matching unit files with prefix 'repro'.)
Image:
/tmp/repro
Portable Service:
repro
Operating System:
n/a
Unit files:
repro.service
$ sudo /usr/lib/systemd/portablectl inspect ./repro.img
(Matching unit files with prefix 'repro.img'.)
Failed to inspect image metadata: Wrong medium type
```",True,"protabled: don't insist on "".raw"" suffix for raw images so strictly - **systemd version the issue has been seen with**
> 239
**Used distribution**
> Arch
**Expected behaviour you didn't see**
> I have a squashfs image `image.img`. If I mount that at `/tmp/image`, then `sudo /lib/systemd/portablectl inspect /tmp/image` reports metadata fine. I expected `sudo /lib/systemd/portablectl inspect image.img` to work as well.
**Unexpected behaviour you saw**
> The command printed `Failed to inspect image metadata: Wrong medium type`. `portablectl attach` also prints the `Wrong medium type` message.
**Steps to reproduce the problem**
First build a squashfs image that reproduces the problem.
Save the following to `repro.nix`.
```nix
with (import (fetchTarball {
url = ""https://github.com/NixOS/nixpkgs/archive/56b9f6fc8e1c3a4ad10ff7c61e461d7b7e038833.tar.gz"";
sha256 = ""0v5y4wjfxbappsaibw88h7n1flcx7kpvg51mjv3h9m4aa3fn2c8q"";
}) {});
let
imageDir = stdenv.mkDerivation {
name = ""repro-filesystem"";
buildCommand = ''
mkdir -p $out/usr/lib
mkdir -p $out/usr/lib/systemd/system
echo ""PORTABLE_PRETTY_NAME=repro"" > $out/usr/lib/os-release
touch $out/usr/lib/systemd/system/repro.service
'';
};
in
stdenv.mkDerivation {
name = ""repro.img"";
nativeBuildInputs = [ squashfsTools ];
buildInputs = [ imageDir ];
buildCommand =
''
mksquashfs ${imageDir} $out \
-no-fragments \
-processors 1 \
-all-root \
-b 1048576 \
-comp xz \
-Xdict-size 100% \
'';
}
```
Run [`nix build -f repro.nix --out-link repro.img`](https://nixos.org/nix/).
Then:
```
$ file $(realpath repro.img)
/nix/store/cildzy70a6jgk5cq1d3mnbm7bg2q2wxh-repro.img: Squashfs filesystem, little endian, version 4.0, 493 bytes, 7 inodes, blocksize: 1048576 bytes, created: Thu Jan 1 00:00:01 1970
$ mkdir /tmp/repro
$ sudo mount repro.img /tmp/repro
$ sudo /usr/lib/systemd/portablectl inspect /tmp/repro
(Matching unit files with prefix 'repro'.)
Image:
/tmp/repro
Portable Service:
repro
Operating System:
n/a
Unit files:
repro.service
$ sudo /usr/lib/systemd/portablectl inspect ./repro.img
(Matching unit files with prefix 'repro.img'.)
Failed to inspect image metadata: Wrong medium type
```",1,protabled don t insist on raw suffix for raw images so strictly systemd version the issue has been seen with used distribution arch expected behaviour you didn t see i have a squashfs image image img if i mount that at tmp image then sudo lib systemd portablectl inspect tmp image reports metadata fine i expected sudo lib systemd portablectl inspect image img to work as well unexpected behaviour you saw the command printed failed to inspect image metadata wrong medium type portablectl attach also prints the wrong medium type message steps to reproduce the problem first build a squashfs image that reproduces the problem save the following to repro nix nix with import fetchtarball url let imagedir stdenv mkderivation name repro filesystem buildcommand mkdir p out usr lib mkdir p out usr lib systemd system echo portable pretty name repro out usr lib os release touch out usr lib systemd system repro service in stdenv mkderivation name repro img nativebuildinputs buildinputs buildcommand mksquashfs imagedir out no fragments processors all root b comp xz xdict size run then file realpath repro img nix store repro img squashfs filesystem little endian version bytes inodes blocksize bytes created thu jan mkdir tmp repro sudo mount repro img tmp repro sudo usr lib systemd portablectl inspect tmp repro matching unit files with prefix repro image tmp repro portable service repro operating system n a unit files repro service sudo usr lib systemd portablectl inspect repro img matching unit files with prefix repro img failed to inspect image metadata wrong medium type ,1
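Until portabled relaxes the suffix check described in the systemd issue above, a practical workaround is to expose the squashfs image under a `.raw` name (the suffix portablectl currently insists on for raw images) before inspecting or attaching it. This is a hedged sketch, not part of the issue: the symlink destination and the choice of symlinking instead of copying are my assumptions, and `portablectl` typically needs the same privileges the reporter used (sudo).
```python
import os
import subprocess

def inspect_with_raw_suffix(image: str, workdir: str = "/tmp") -> None:
    """Expose a squashfs image under a .raw name so `portablectl inspect` accepts it (workaround sketch)."""
    base = os.path.splitext(os.path.basename(image))[0]
    raw_path = os.path.join(workdir, base + ".raw")      # assumed destination for the symlink
    if not os.path.exists(raw_path):
        os.symlink(os.path.abspath(image), raw_path)
    # run with sufficient privileges; the issue invokes portablectl via sudo
    subprocess.run(["portablectl", "inspect", raw_path], check=True)

if __name__ == "__main__":
    inspect_with_raw_suffix("repro.img")
```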
35198,2789835026.0,IssuesEvent,2015-05-08 21:48:06,google/google-visualization-api-issues,https://api.github.com/repos/google/google-visualization-api-issues,opened,Calendar function with date pull down and select,Priority-Low Type-Enhancement,"Original [issue 242](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=242) created by orwant on 2010-04-01T19:11:24.000Z:
What would you like to see us add to this API?
Google spreadsheet that has a calendar function that sits in an individual
cell, allowing date pull-down and user date selection. The chosen date will
then show in the cell.
What component is this issue related to (PieChart, LineChart, DataTable,
Query, etc)?
*********************************************************
For developers viewing this issue: please click the 'star' icon to be
notified of future changes, and to let us know how many of you are
interested in seeing it resolved.
*********************************************************
",1.0,"Calendar function with date pull down and select - Original [issue 242](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=242) created by orwant on 2010-04-01T19:11:24.000Z:
What would you like to see us add to this API?
Google spreadsheet that has a calendar function that sits in an individual
cell, allowing date pull-down and user date selection. The chosen date will
then show in the cell.
What component is this issue related to (PieChart, LineChart, DataTable,
Query, etc)?
*********************************************************
For developers viewing this issue: please click the 'star' icon to be
notified of future changes, and to let us know how many of you are
interested in seeing it resolved.
*********************************************************
",0,calendar function with date pull down and select original created by orwant on what would you like to see us add to this api google spreadsheet that has a calendar function that sits in a individual cell allowing date pull down and user date selection date chosen will then show in cell what component is this issue related to piechart linechart datatable query etc for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved ,0
1121,14421913967.0,IssuesEvent,2020-12-05 00:43:26,AzureAD/microsoft-authentication-library-for-dotnet,https://api.github.com/repos/AzureAD/microsoft-authentication-library-for-dotnet,closed,[Bug] Throw a better exception when ROPC is attempted with MSA account,Fixed P2 Supportability bug,"**Which Version of MSAL are you using ?**
4.22
Attempt ROPC, with authority correctly set to `l.m.o/organizations`. Provide an MSA username to the API.
**Expected** an exception explaining the root cause.
Implementation suggestion:
- We could use either the ""domain_name"" from the userrealm call (need to confirm with MSA folks)

- Or the fact that the subsequent call to `GET /FederationMetadata/2007-06/FederationMetadata.xml HTTP/1.1` fails with 406 Not Acceptable.
**Actual** a bad exception which causes developers to think there is a bug in MSAL
---> System.InvalidOperationException: Sequence contains no elements
at System.Linq.ThrowHelper.ThrowNoElementsException()
at System.Linq.Enumerable.First[TSource](IEnumerable`1 source)
at Microsoft.Identity.Client.WsTrust.MexDocument.SetPolicyEndpointAddresses(XContainer mexDocument)
at Microsoft.Identity.Client.WsTrust.MexDocument..ctor(String responseBody)
at Microsoft.Identity.Client.WsTrust.WsTrustWebRequestManager.GetMexDocumentAsync(String federationMetadataUrl, RequestContext requestContext)
at Microsoft.Identity.Client.WsTrust.CommonNonInteractiveHandler.PerformWsTrustMexExchangeAsync(String federationMetadataUrl, String cloudAudienceUrn, UserAuthType userAuthType, String username, SecureString password)
at Microsoft.Identity.Client.Internal.Requests.UsernamePasswordRequest.FetchAssertionFromWsTrustAsync()
at Microsoft.Identity.Client.Internal.Requests.UsernamePasswordRequest.ExecuteAsync(CancellationToken cancellationToken)
at Microsoft.Identity.Client.Internal.Requests.RequestBase.RunAsync(CancellationToken cancellationToken)
at Microsoft.Identity.Client.ApiConfig.Executors.PublicClientExecutor.ExecuteAsync(AcquireTokenCommonParameters commonParameters, AcquireTokenByUsernamePasswordParameters usernamePasswordParameters, CancellationToken cancellationToken)
at Azure.Identity.AbstractAcquireTokenParameterBuilderExtensions.ExecuteAsync[T](AbstractAcquireTokenParameterBuilder`1 builder, Boolean async, CancellationToken cancellationToken)
at Azure.Identity.MsalPublicClient.AcquireTokenByUsernamePasswordAsync(String[] scopes, String username, SecureString password, Boolean async, CancellationToken cancellationToken)
at Azure.Identity.UsernamePasswordCredential.GetTokenImplAsync(Boolean async, TokenRequestContext requestContext, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
",True,"[Bug] Throw a better exception when ROPC is attempted with MSA account - **Which Version of MSAL are you using ?**
4.22
Attempt ROPC, with authority correctly set to `l.m.o/organizations`. Provide an MSA username to the API.
**Expected** an exception explaining the root cause.
Implementation suggestion:
- We could use either the ""domain_name"" from the userrealm call (need to confirm with MSA folks)

- Or the fact that the subsequent call to `GET /FederationMetadata/2007-06/FederationMetadata.xml HTTP/1.1` fails with 406 Not Acceptable.
**Actual** a bad exception which causes developers to think there is a bug in MSAL
---> System.InvalidOperationException: Sequence contains no elements
at System.Linq.ThrowHelper.ThrowNoElementsException()
at System.Linq.Enumerable.First[TSource](IEnumerable`1 source)
at Microsoft.Identity.Client.WsTrust.MexDocument.SetPolicyEndpointAddresses(XContainer mexDocument)
at Microsoft.Identity.Client.WsTrust.MexDocument..ctor(String responseBody)
at Microsoft.Identity.Client.WsTrust.WsTrustWebRequestManager.GetMexDocumentAsync(String federationMetadataUrl, RequestContext requestContext)
at Microsoft.Identity.Client.WsTrust.CommonNonInteractiveHandler.PerformWsTrustMexExchangeAsync(String federationMetadataUrl, String cloudAudienceUrn, UserAuthType userAuthType, String username, SecureString password)
at Microsoft.Identity.Client.Internal.Requests.UsernamePasswordRequest.FetchAssertionFromWsTrustAsync()
at Microsoft.Identity.Client.Internal.Requests.UsernamePasswordRequest.ExecuteAsync(CancellationToken cancellationToken)
at Microsoft.Identity.Client.Internal.Requests.RequestBase.RunAsync(CancellationToken cancellationToken)
at Microsoft.Identity.Client.ApiConfig.Executors.PublicClientExecutor.ExecuteAsync(AcquireTokenCommonParameters commonParameters, AcquireTokenByUsernamePasswordParameters usernamePasswordParameters, CancellationToken cancellationToken)
at Azure.Identity.AbstractAcquireTokenParameterBuilderExtensions.ExecuteAsync[T](AbstractAcquireTokenParameterBuilder`1 builder, Boolean async, CancellationToken cancellationToken)
at Azure.Identity.MsalPublicClient.AcquireTokenByUsernamePasswordAsync(String[] scopes, String username, SecureString password, Boolean async, CancellationToken cancellationToken)
at Azure.Identity.UsernamePasswordCredential.GetTokenImplAsync(Boolean async, TokenRequestContext requestContext, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
",1, throw a better exception when ropc is attempted with msa account which version of msal are you using attempt ropc with authority correctly set to l m o organizations provide an msa username to the api expected an exception explaining the root cause implementation suggestion we could use either the domain name from the userrealm call need to confirm with msa folks or the fact that the subsequent call to get federationmetadata federationmetadata xml http fails with not acceptable actual a bad exception which causes developers to think there is a bug in msal system invalidoperationexception sequence contains no elements at system linq throwhelper thrownoelementsexception at system linq enumerable first ienumerable source at microsoft identity client wstrust mexdocument setpolicyendpointaddresses xcontainer mexdocument at microsoft identity client wstrust mexdocument ctor string responsebody at microsoft identity client wstrust wstrustwebrequestmanager getmexdocumentasync string federationmetadataurl requestcontext requestcontext at microsoft identity client wstrust commonnoninteractivehandler performwstrustmexexchangeasync string federationmetadataurl string cloudaudienceurn userauthtype userauthtype string username securestring password at microsoft identity client internal requests usernamepasswordrequest fetchassertionfromwstrustasync at microsoft identity client internal requests usernamepasswordrequest executeasync cancellationtoken cancellationtoken at microsoft identity client internal requests requestbase runasync cancellationtoken cancellationtoken at microsoft identity client apiconfig executors publicclientexecutor executeasync acquiretokencommonparameters commonparameters acquiretokenbyusernamepasswordparameters usernamepasswordparameters cancellationtoken cancellationtoken at azure identity abstractacquiretokenparameterbuilderextensions executeasync abstractacquiretokenparameterbuilder builder boolean async cancellationtoken cancellationtoken at azure identity msalpublicclient acquiretokenbyusernamepasswordasync string scopes string username securestring password boolean async cancellationtoken cancellationtoken at azure identity usernamepasswordcredential gettokenimplasync boolean async tokenrequestcontext requestcontext cancellationtoken cancellationtoken end of inner exception stack trace ,1
1559,22782815730.0,IssuesEvent,2022-07-08 22:22:42,verilator/verilator,https://api.github.com/repos/verilator/verilator,closed,V3Lexer_pregen.yy.cpp:8665:18: error: out-of-line definition of 'LexerInput' does not match any declaration in 'V3LexerBase'out-of-line definition of 'LexerInput',area: portability resolution: answered,"I am trying to compile verilator on macOS 12.4 Dependencies are installed through macports
```
$ /opt/local/bin/flex --version
flex 2.6.4
$ /opt/local/bin/bison --version
bison (GNU Bison) 3.8.2
```
The compilation fails with the following message:
```
In file included from ../V3ParseLex.cpp:28:
V3Lexer_pregen.yy.cpp:8665:18: error: out-of-line definition of 'LexerInput' does not match any declaration in 'V3LexerBase'
int yyFlexLexer::LexerInput( char* buf, int /* max_size */ )
^~~~~~~~~~
V3Lexer_pregen.yy.cpp:8694:19: error: out-of-line definition of 'LexerOutput' does not match any declaration in 'V3LexerBase'
void yyFlexLexer::LexerOutput( const char* buf, int size )
^~~~~~~~~~~
```
I think this is a flex/bison issue but I'm not sure what went wrong. Please help.
",True,"V3Lexer_pregen.yy.cpp:8665:18: error: out-of-line definition of 'LexerInput' does not match any declaration in 'V3LexerBase'out-of-line definition of 'LexerInput' - I am trying to compile verilator on macOS 12.4 Dependencies are installed through macports
```
$ /opt/local/bin/flex --version
flex 2.6.4
$ /opt/local/bin/bison --version
bison (GNU Bison) 3.8.2
```
The compilation fails with the following message:
```
In file included from ../V3ParseLex.cpp:28:
V3Lexer_pregen.yy.cpp:8665:18: error: out-of-line definition of 'LexerInput' does not match any declaration in 'V3LexerBase'
int yyFlexLexer::LexerInput( char* buf, int /* max_size */ )
^~~~~~~~~~
V3Lexer_pregen.yy.cpp:8694:19: error: out-of-line definition of 'LexerOutput' does not match any declaration in 'V3LexerBase'
void yyFlexLexer::LexerOutput( const char* buf, int size )
^~~~~~~~~~~
```
I think this is a flex/bison issue but I'm not sure what went wrong. Please help.
",1, pregen yy cpp error out of line definition of lexerinput does not match any declaration in out of line definition of lexerinput i am trying to compile verilator on macos dependencies are installed through macports opt local bin flex version flex opt local bin bison version bison gnu bison the compilation fails with the following message in file included from cpp pregen yy cpp error out of line definition of lexerinput does not match any declaration in int yyflexlexer lexerinput char buf int max size pregen yy cpp error out of line definition of lexeroutput does not match any declaration in void yyflexlexer lexeroutput const char buf int size i think this is a flex bison issue but i m not sure what went wrong please help ,1
1036,13220706504.0,IssuesEvent,2020-08-17 12:54:11,argoproj/argo-cd,https://api.github.com/repos/argoproj/argo-cd,closed,Improve error message when excluding https:// from the URL in configmap ,enhancement good first issue type:supportability,"# Summary
Validate the URL in the argocd-server configmap and print an error message when the URL does not contain a protocol prefix.
# Motivation
When configuring Dex for SSO in the argocd-cm configmap, it is required to provide a URL to the ArgoCD instance.
If this URL does not explicitly contain the HTTPS protocol, every SSO login attempt will generate a `Failed to query provider ""argocd-server.io/api/dex"": 400 Bad Request: 400 Bad Request` message and, in the argocd-server log, `time=""2020-08-14T12:19:03Z"" level=info msg=""Initializing OIDC provider (issuer: argocd-server.io/api/dex)""`.
These two log entries are not good enough for debugging what is wrong, when in fact the URL must be formatted like this:
Example
BAD configmap
```
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
name: argocd-cm
namespace: argocd
data:
url: argocd-server.io
dex.config: |
logger:
level: debug
format: json
connectors:
- type: saml
name: SAML
id: saml
config:
ssoURL: https://identity-provider/idp/profile/SAML2/POST/SSO
entityIssuer: https://argocd-server.io/api/dex
ca: /opt/cert/ca.pem
usernameAttr: name
emailAttr: email
groupsAttr: groups
```
GOOD configmap
```
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
name: argocd-cm
namespace: argocd
data:
url: https://argocd-server.io
dex.config: |
logger:
level: debug
format: json
connectors:
- type: saml
name: SAML
id: saml
config:
ssoURL: https://identity-provider/idp/profile/SAML2/POST/SSO
entityIssuer: https://argocd-server.io/api/dex
ca: /opt/cert/ca.pem
usernameAttr: name
emailAttr: email
groupsAttr: groups
```
# Proposal
When argocd-server loads the configmap `argocd-cm` do a validation of the URL (data.url) provided and check if it has a protocol prefixed or not and log accordingly. ",True,"Improve error message when excluding https:// from the URL in configmap - # Summary
Validate the URL in the argocd-server configmap and print an error message when the URL does not contain a protocol prefix.
# Motivation
When configuring Dex for SSO in the argocd-cm configmap, it is required to provide a URL to the ArgoCD instance.
If this URL does not explicitly contain the HTTPS protocol, every SSO login attempt will generate a `Failed to query provider ""argocd-server.io/api/dex"": 400 Bad Request: 400 Bad Request` message and, in the argocd-server log, `time=""2020-08-14T12:19:03Z"" level=info msg=""Initializing OIDC provider (issuer: argocd-server.io/api/dex)""`.
These two log entries are not good enough for debugging what is wrong, when in fact the URL must be formatted like this:
Example
BAD configmap
```
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
name: argocd-cm
namespace: argocd
data:
url: argocd-server.io
dex.config: |
logger:
level: debug
format: json
connectors:
- type: saml
name: SAML
id: saml
config:
ssoURL: https://identity-provider/idp/profile/SAML2/POST/SSO
entityIssuer: https://argocd-server.io/api/dex
ca: /opt/cert/ca.pem
usernameAttr: name
emailAttr: email
groupsAttr: groups
```
GOOD configmap
```
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
app.kubernetes.io/name: argocd-cm
app.kubernetes.io/part-of: argocd
name: argocd-cm
namespace: argocd
data:
url: https://argocd-server.io
dex.config: |
logger:
level: debug
format: json
connectors:
- type: saml
name: SAML
id: saml
config:
ssoURL: https://identity-provider/idp/profile/SAML2/POST/SSO
entityIssuer: https://argocd-server.io/api/dex
ca: /opt/cert/ca.pem
usernameAttr: name
emailAttr: email
groupsAttr: groups
```
# Proposal
When argocd-server loads the configmap `argocd-cm` do a validation of the URL (data.url) provided and check if it has a protocol prefixed or not and log accordingly. ",1,improve error message when excluding https from the url in configmap summary url validation for argocd server configmap and print a error message that the url does not contain any protocol prefix motivation when configuring dex config for sso in the configmap argocd cm is it required to provide a url to the argocd instance if this url does not contain explicitly the https protocol every sso login attempt will generate a failed to query provider argocd server io api dex bad request bad request message and in the argocd server log time level info msg initializing oidc provider issuer argocd server io api dex these two log entries are not good enough for debugging what is wrong when in fact it is required that the url must be formatted like this example bad configmap kind configmap apiversion metadata labels app kubernetes io name argocd cm app kubernetes io part of argocd name argocd cm namespace argocd data url argocd server io dex config logger level debug format json connectors type saml name saml id saml config ssourl entityissuer ca opt cert ca pem usernameattr name emailattr email groupsattr groups good configmap kind configmap apiversion metadata labels app kubernetes io name argocd cm app kubernetes io part of argocd name argocd cm namespace argocd data url dex config logger level debug format json connectors type saml name saml id saml config ssourl entityissuer ca opt cert ca pem usernameattr name emailattr email groupsattr groups proposal when argocd server loads the configmap argocd cm do a validation of the url data url provided and check if it has a protocol prefixed or not and log accordingly ,1
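The proposal above is essentially a scheme check on `data.url`. A minimal sketch of that validation in Python (ArgoCD itself is Go, so this only illustrates the check, not the actual implementation; the error wording is mine):
```python
from urllib.parse import urlparse

def validate_argocd_url(url: str) -> None:
    """Raise a descriptive error when data.url lacks a protocol prefix, as the proposal suggests."""
    if urlparse(url).scheme not in ("http", "https"):
        raise ValueError(
            f"argocd-cm data.url {url!r} has no protocol prefix; "
            "expected something like 'https://argocd-server.io'"
        )

validate_argocd_url("https://argocd-server.io")      # passes silently
try:
    validate_argocd_url("argocd-server.io")          # the misconfiguration from the issue
except ValueError as err:
    print(err)
```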
20450,4544456807.0,IssuesEvent,2016-09-10 18:07:20,harningt/luajson,https://api.github.com/repos/harningt/luajson,opened,Update README to remove non-present functionality,documentation,At least one function is mentioned in the readme that is non-present: isEncodable.,1.0,Update README to remove non-present functionality - At least one function is mentioned in the readme that is non-present: isEncodable.,0,update readme to remove non present functionality at least one function is mentioned in the readme that is non present isencodable ,0
58596,14292097415.0,IssuesEvent,2020-11-24 00:12:19,devtron-labs/devtron,https://api.github.com/repos/devtron-labs/devtron,opened,"CVE-2019-18658 (High) detected in k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e, k8s.io/helm/pkg/sympath-eecf22f77df5f65c823aacd2dbd30ae6c65f186e",security vulnerability,"## CVE-2019-18658 - High Severity Vulnerability
Vulnerable Libraries - k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e , k8s.io/helm/pkg/sympath-eecf22f77df5f65c823aacd2dbd30ae6c65f186e
k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e
The Kubernetes Package Manager
Dependency Hierarchy:
- :x: **k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e** (Vulnerable Library)
k8s.io/helm/pkg/sympath-eecf22f77df5f65c823aacd2dbd30ae6c65f186e
The Kubernetes Package Manager
Dependency Hierarchy:
- k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e (Root Library)
- :x: **k8s.io/helm/pkg/sympath-eecf22f77df5f65c823aacd2dbd30ae6c65f186e** (Vulnerable Library)
Found in HEAD commit: f7db3d4b83b1d3b0008f56e9a649b36ed2ae830d
Found in base branch: main
Vulnerability Details
In Helm 2.x before 2.15.2, commands that deal with loading a chart as a directory or packaging a chart provide an opportunity for a maliciously designed chart to include sensitive content such as /etc/passwd, or to execute a denial of service (DoS) via a special file such as /dev/urandom, via symlinks. No version of Tiller is known to be impacted. This is a client-only issue.
Publish Date: 2019-11-12
URL: CVE-2019-18658
CVSS 3 Score Details (9.8 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18658
Release Date: 2019-11-12
Fix Resolution: v2.15.2
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2019-18658 (High) detected in k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e, k8s.io/helm/pkg/sympath-eecf22f77df5f65c823aacd2dbd30ae6c65f186e - ## CVE-2019-18658 - High Severity Vulnerability
Vulnerable Libraries - k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e , k8s.io/helm/pkg/sympath-eecf22f77df5f65c823aacd2dbd30ae6c65f186e
k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e
The Kubernetes Package Manager
Dependency Hierarchy:
- :x: **k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e** (Vulnerable Library)
k8s.io/helm/pkg/sympath-eecf22f77df5f65c823aacd2dbd30ae6c65f186e
The Kubernetes Package Manager
Dependency Hierarchy:
- k8s.io/helm/pkg/chartutil-eecf22f77df5f65c823aacd2dbd30ae6c65f186e (Root Library)
- :x: **k8s.io/helm/pkg/sympath-eecf22f77df5f65c823aacd2dbd30ae6c65f186e** (Vulnerable Library)
Found in HEAD commit: f7db3d4b83b1d3b0008f56e9a649b36ed2ae830d
Found in base branch: main
Vulnerability Details
In Helm 2.x before 2.15.2, commands that deal with loading a chart as a directory or packaging a chart provide an opportunity for a maliciously designed chart to include sensitive content such as /etc/passwd, or to execute a denial of service (DoS) via a special file such as /dev/urandom, via symlinks. No version of Tiller is known to be impacted. This is a client-only issue.
Publish Date: 2019-11-12
URL: CVE-2019-18658
CVSS 3 Score Details (9.8 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-18658
Release Date: 2019-11-12
Fix Resolution: v2.15.2
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in io helm pkg chartutil io helm pkg sympath cve high severity vulnerability vulnerable libraries io helm pkg chartutil io helm pkg sympath io helm pkg chartutil the kubernetes package manager dependency hierarchy x io helm pkg chartutil vulnerable library io helm pkg sympath the kubernetes package manager dependency hierarchy io helm pkg chartutil root library x io helm pkg sympath vulnerable library found in head commit a href found in base branch main vulnerability details in helm x before commands that deal with loading a chart as a directory or packaging a chart provide an opportunity for a maliciously designed chart to include sensitive content such as etc passwd or to execute a denial of service dos via a special file such as dev urandom via symlinks no version of tiller is known to be impacted this is a client only issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
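The Helm vulnerability above relies on a chart directory containing symlinks that resolve outside the chart (for example to /etc/passwd or /dev/urandom). Independent of upgrading Helm, a defensive check like the sketch below can flag such symlinks before a chart is packaged or loaded; this is my own illustrative mitigation, not something the advisory prescribes.
```python
import os

def find_escaping_symlinks(chart_dir: str) -> list:
    """Return symlinks under chart_dir whose targets resolve outside chart_dir."""
    root = os.path.realpath(chart_dir)
    offenders = []
    for dirpath, dirnames, filenames in os.walk(chart_dir):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                target = os.path.realpath(path)
                if os.path.commonpath([root, target]) != root:
                    offenders.append(f"{path} -> {target}")
    return offenders

if __name__ == "__main__":
    for entry in find_escaping_symlinks("./mychart"):
        print("symlink escapes chart:", entry)
```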
1875,27880633192.0,IssuesEvent,2023-03-21 19:06:09,argoproj/argo-cd,https://api.github.com/repos/argoproj/argo-cd,closed,Expose `--kubectl-parallelism-limit` as an environment variable,enhancement type:supportability component:distribution,"# Summary
It looks like [not all of the command line flags](https://github.com/argoproj/argo-cd/blob/master/cmd/argocd-application-controller/commands/argocd_application_controller.go#L153-L167) can be passed as environment variables. In particular I'd like to pass the `--kubectl-parallelism-limit` flag as an environment variable.
# Motivation
`--kubectl-parallelism-limit` can be useful to limit memory usage of ArgoCD and prevent OOM deaths when it suddenly syncs a bunch of applications.
We're using kustomize to pull in [the HA manifests](https://github.com/argoproj/argo-cd/tree/master/manifests/ha) and would prefer to just be able to set this with environment variables so we don't need to keep the command line in sync when overriding it with kustomize.
# Proposal
[`kubectlParallelismLimit`](https://github.com/argoproj/argo-cd/blob/e01ab05d555b799cba08097710482afac3be601e/cmd/argocd-application-controller/commands/argocd_application_controller.go#L164) doesn't currently have a default that pulls from an environment variable like many of the other config options do so the simplest thing to do is to change it to:
```go
command.Flags().Int64Var(&kubectlParallelismLimit, ""kubectl-parallelism-limit"", env.ParseNumFromEnv(""ARGOCD_APPLICATION_CONTROLLER_KUBECTL_PARALLELISM_LIMIT"", 20, 0, math.MaxInt64), ""Number of allowed concurrent kubectl fork/execs. Any value less the 1 means no limit."")
```",True,"Expose `--kubectl-parallelism-limit` as an environment variable - # Summary
It looks like [not all of the command line flags](https://github.com/argoproj/argo-cd/blob/master/cmd/argocd-application-controller/commands/argocd_application_controller.go#L153-L167) can be passed as environment variables. In particular I'd like to pass the `--kubectl-parallelism-limit` flag as an environment variable.
# Motivation
`--kubectl-parallelism-limit` can be useful to limit memory usage of ArgoCD and prevent OOM deaths when it suddenly syncs a bunch of applications.
We're using kustomize to pull in [the HA manifests](https://github.com/argoproj/argo-cd/tree/master/manifests/ha) and would prefer to just be able to set this with environment variables so we don't need to keep the command line in sync when overriding it with kustomize.
# Proposal
[`kubectlParallelismLimit`](https://github.com/argoproj/argo-cd/blob/e01ab05d555b799cba08097710482afac3be601e/cmd/argocd-application-controller/commands/argocd_application_controller.go#L164) doesn't currently have a default that pulls from an environment variable like many of the other config options do so the simplest thing to do is to change it to:
```go
command.Flags().Int64Var(&kubectlParallelismLimit, ""kubectl-parallelism-limit"", env.ParseNumFromEnv(""ARGOCD_APPLICATION_CONTROLLER_KUBECTL_PARALLELISM_LIMIT"", 20, 0, math.MaxInt64), ""Number of allowed concurrent kubectl fork/execs. Any value less the 1 means no limit."")
```",1,expose kubectl parallelism limit as an environment variable summary it looks like can be passed as environment variables in particular i d like to pass the kubectl parallelism limit flag as an environment variable motivation kubectl parallelism limit can be useful to limit memory usage of argocd and prevent oom deaths when it suddenly syncs a bunch of applications we re using kustomize to pull in and would prefer to just be able to set this with environment variables so we don t need to keep the command in line when overwriting it with kustomize proposal doesn t currently have a default that pulls from an environment variable like many of the other config options do so the simplest thing to do is to change it to go command flags kubectlparallelismlimit kubectl parallelism limit env parsenumfromenv argocd application controller kubectl parallelism limit math number of allowed concurrent kubectl fork execs any value less the means no limit ,1
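The one-line change proposed above just gives the flag a bounded default read from an environment variable. For readers who are not in the Go codebase, a rough Python equivalent of that "parse a number from the environment with bounds and a fallback" helper looks like this; the helper name and fallback behavior are mine, not ArgoCD's:
```python
import os

def parse_num_from_env(name: str, default: int, minimum: int, maximum: int) -> int:
    """Read an integer from the environment, falling back to `default` for missing, malformed, or out-of-range values."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    try:
        value = int(raw)
    except ValueError:
        return default
    if not (minimum <= value <= maximum):
        return default
    return value

# roughly the equivalent of the proposed flag default:
limit = parse_num_from_env("ARGOCD_APPLICATION_CONTROLLER_KUBECTL_PARALLELISM_LIMIT", 20, 0, 2**63 - 1)
print(limit)
```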
1196,15436267105.0,IssuesEvent,2021-03-07 12:22:43,openwall/john,https://api.github.com/repos/openwall/john,opened,SIPdump uses a deprecated PCAP function,portability,"Seen when using latest'n'greatest macOS SDK
```
SIPdump.c:228:10: warning: 'pcap_lookupdev' is deprecated: use 'pcap_findalldevs' and use the first device
[-Wdeprecated-declarations]
dev = pcap_lookupdev(errbuf);
^
```",True,"SIPdump uses a deprecated PCAP function - Seen when using latest'n'greatest macOS SDK
```
SIPdump.c:228:10: warning: 'pcap_lookupdev' is deprecated: use 'pcap_findalldevs' and use the first device
[-Wdeprecated-declarations]
dev = pcap_lookupdev(errbuf);
^
```",1,sipdump uses a deprecated pcap function seen when using latest n greatest macos sdk sipdump c warning pcap lookupdev is deprecated use pcap findalldevs and use the first device dev pcap lookupdev errbuf ,1
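The deprecation warning above points at the fix itself: call `pcap_findalldevs` and take the first device instead of `pcap_lookupdev`. The real change would be made in SIPdump's C code; the ctypes sketch below only demonstrates the replacement call sequence against libpcap and assumes libpcap is installed and findable on the system.
```python
import ctypes
import ctypes.util

PCAP_ERRBUF_SIZE = 256

class PcapIf(ctypes.Structure):
    pass

# Layout of struct pcap_if (pcap_if_t): next, name, description, addresses, flags.
PcapIf._fields_ = [
    ("next", ctypes.POINTER(PcapIf)),
    ("name", ctypes.c_char_p),
    ("description", ctypes.c_char_p),
    ("addresses", ctypes.c_void_p),
    ("flags", ctypes.c_uint),
]

_pcap = ctypes.CDLL(ctypes.util.find_library("pcap"))
_pcap.pcap_findalldevs.argtypes = [ctypes.POINTER(ctypes.POINTER(PcapIf)), ctypes.c_char_p]
_pcap.pcap_freealldevs.argtypes = [ctypes.POINTER(PcapIf)]

def first_capture_device() -> str:
    """Return the name of the first device reported by pcap_findalldevs."""
    errbuf = ctypes.create_string_buffer(PCAP_ERRBUF_SIZE)
    devs = ctypes.POINTER(PcapIf)()
    if _pcap.pcap_findalldevs(ctypes.byref(devs), errbuf) != 0 or not devs:
        raise RuntimeError(errbuf.value.decode() or "no capture devices found")
    name = devs.contents.name.decode()
    _pcap.pcap_freealldevs(devs)
    return name

if __name__ == "__main__":
    print(first_capture_device())
```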
1034,13166272259.0,IssuesEvent,2020-08-11 08:15:56,esrlabs/northstar,https://api.github.com/repos/esrlabs/northstar,closed,Create a shell-script that test for required kernel features,linux portability,"relates to #66
The idea is that Peter will create such a script that actually tries to make use of the required kernel features and reports what is not working.
Once we have this, we can integrate it into the north daemon so it can be executed when the north configuration is set appropriately.
",True,"Create a shell-script that test for required kernel features - relates to #66
The idea is that Peter will create such a script that actually tries to make use of the required kernel features and reports what is not working.
Once we have this, we can integrate it into the north daemon so it can be executed when the north configuration is set appropriately.
",1,create a shell script that test for required kernel features relates to the idea is that peter will create such a script that actually tries to make use of the required kernel features and reports what is not working once we got this we can integrate it into the north daemon so it can be executed when the north configuration is set appropriately ,1
140,3511063010.0,IssuesEvent,2016-01-09 23:59:32,PCSX2/pcsx2,https://api.github.com/repos/PCSX2/pcsx2,closed,GSdx opengl is not compatible with virtual box / vmware (linux guest),Enhancement OS: Linux Plugin: GSdx Portability,"It is hard for windows dev to test their codes. Unfortunately VMs are often limited to a very old opengl version.
Maybe a future version of Mesa will export OpenGL 3.3. Until then, it would be nice to support GSdx SW rendering on the very old OpenGL 2.1 (OMG, imagine asking a Windows dev for DX9 code).
Here a status report on virtualbox / debian jessy.
```
libGL error: pci id for fd 4: 80ee:beef, driver (null)
OpenGL Warning: Failed to connect to host. Make sure 3D acceleration is enabled for this VM.
libGL error: core dri or dri2 extension not found
libGL error: failed to load driver: vboxvideo
server glx vendor string: SGI
server glx version string: 1.4
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 3.5, 128 bits)
OpenGL version string: 3.0 Mesa 10.3.2
OpenGL shading language version string: 1.30
```",True,"GSdx opengl is not compatible with virtual box / vmware (linux guest) - It is hard for windows dev to test their codes. Unfortunately VMs are often limited to a very old opengl version.
Maybe a future version of Mesa will export OpenGL 3.3. Until then, it would be nice to support GSdx SW rendering on the very old OpenGL 2.1 (OMG, imagine asking a Windows dev for DX9 code).
Here a status report on virtualbox / debian jessy.
```
libGL error: pci id for fd 4: 80ee:beef, driver (null)
OpenGL Warning: Failed to connect to host. Make sure 3D acceleration is enabled for this VM.
libGL error: core dri or dri2 extension not found
libGL error: failed to load driver: vboxvideo
server glx vendor string: SGI
server glx version string: 1.4
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 3.5, 128 bits)
OpenGL version string: 3.0 Mesa 10.3.2
OpenGL shading language version string: 1.30
```",1,gsdx opengl is not compatible with virtual box vmware linux guest it is hard for windows dev to test their codes unfortunately vms are often limited to a very old opengl version maybe future version of mesa will export opengl until then it would be nice to support gsdx sw rendering on the very old opengl omg imaging asking code to a windows dev here a status report on virtualbox debian jessy libgl error pci id for fd beef driver null opengl warning failed to connect to host make sure acceleration is enabled for this vm libgl error core dri or extension not found libgl error failed to load driver vboxvideo server glx vendor string sgi server glx version string client glx vendor string mesa project and sgi client glx version string opengl vendor string vmware inc opengl renderer string gallium on llvmpipe llvm bits opengl version string mesa opengl shading language version string ,1
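When checking whether a VM guest is stuck on an old GL version like the report above, the interesting datum is the "OpenGL version string" line (here 3.0 from llvmpipe). Below is a small sketch that extracts it from `glxinfo` output, assuming `glxinfo` is installed, which is how status reports like the one above are typically produced; the 3.3 threshold mirrors the version GSdx hardware rendering targets in this discussion.
```python
import re
import subprocess

def opengl_version():
    """Return the (major, minor) OpenGL version reported by glxinfo, or None if it cannot be parsed."""
    out = subprocess.run(["glxinfo"], capture_output=True, text=True, check=True).stdout
    match = re.search(r"OpenGL version string:\s*(\d+)\.(\d+)", out)
    return (int(match.group(1)), int(match.group(2))) if match else None

if __name__ == "__main__":
    version = opengl_version()
    print("OpenGL", version)
    if version is not None and version < (3, 3):
        print("older than 3.3: only an OpenGL 2.1 software-rendering path would work here")
```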
119926,17643963481.0,IssuesEvent,2021-08-20 01:20:06,shantanujhalt/material-ui,https://api.github.com/repos/shantanujhalt/material-ui,closed,"WS-2021-0154 (Medium) detected in glob-parent-5.1.1.tgz, glob-parent-3.1.0.tgz - autoclosed",security vulnerability,"## WS-2021-0154 - Medium Severity Vulnerability
Vulnerable Libraries - glob-parent-5.1.1.tgz , glob-parent-3.1.0.tgz
glob-parent-5.1.1.tgz
Extract the non-magic parent path from a glob string.
Library home page: https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz
Path to dependency file: material-ui/node_modules/glob-parent/package.json
Path to vulnerable library: material-ui/node_modules/glob-parent/package.json
Dependency Hierarchy:
- lerna-3.22.1.tgz (Root Library)
- link-3.21.0.tgz
- command-3.21.0.tgz
- project-3.21.0.tgz
- :x: **glob-parent-5.1.1.tgz** (Vulnerable Library)
glob-parent-3.1.0.tgz
Strips glob magic from a string to provide the parent directory path
Library home page: https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz
Path to dependency file: material-ui/node_modules/glob-parent/package.json
Path to vulnerable library: material-ui/node_modules/glob-parent/package.json
Dependency Hierarchy:
- cpy-cli-3.1.1.tgz (Root Library)
- cpy-8.1.1.tgz
- globby-9.2.0.tgz
- fast-glob-2.2.7.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
Found in base branch: next
Vulnerability Details
Regular Expression Denial of Service (ReDoS) vulnerability was found in glob-parent before 5.1.2.
Publish Date: 2021-01-27
URL: WS-2021-0154
CVSS 3 Score Details (5.3 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://github.com/gulpjs/glob-parent/releases/tag/v5.1.2
Release Date: 2021-01-27
Fix Resolution: glob-parent - 5.1.2
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"WS-2021-0154 (Medium) detected in glob-parent-5.1.1.tgz, glob-parent-3.1.0.tgz - autoclosed - ## WS-2021-0154 - Medium Severity Vulnerability
Vulnerable Libraries - glob-parent-5.1.1.tgz , glob-parent-3.1.0.tgz
glob-parent-5.1.1.tgz
Extract the non-magic parent path from a glob string.
Library home page: https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.1.tgz
Path to dependency file: material-ui/node_modules/glob-parent/package.json
Path to vulnerable library: material-ui/node_modules/glob-parent/package.json
Dependency Hierarchy:
- lerna-3.22.1.tgz (Root Library)
- link-3.21.0.tgz
- command-3.21.0.tgz
- project-3.21.0.tgz
- :x: **glob-parent-5.1.1.tgz** (Vulnerable Library)
glob-parent-3.1.0.tgz
Strips glob magic from a string to provide the parent directory path
Library home page: https://registry.npmjs.org/glob-parent/-/glob-parent-3.1.0.tgz
Path to dependency file: material-ui/node_modules/glob-parent/package.json
Path to vulnerable library: material-ui/node_modules/glob-parent/package.json
Dependency Hierarchy:
- cpy-cli-3.1.1.tgz (Root Library)
- cpy-8.1.1.tgz
- globby-9.2.0.tgz
- fast-glob-2.2.7.tgz
- :x: **glob-parent-3.1.0.tgz** (Vulnerable Library)
Found in base branch: next
Vulnerability Details
Regular Expression Denial of Service (ReDoS) vulnerability was found in glob-parent before 5.1.2.
Publish Date: 2021-01-27
URL: WS-2021-0154
CVSS 3 Score Details (5.3 )
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
For more information on CVSS3 Scores, click here .
Suggested Fix
Type: Upgrade version
Origin: https://github.com/gulpjs/glob-parent/releases/tag/v5.1.2
Release Date: 2021-01-27
Fix Resolution: glob-parent - 5.1.2
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,ws medium detected in glob parent tgz glob parent tgz autoclosed ws medium severity vulnerability vulnerable libraries glob parent tgz glob parent tgz glob parent tgz extract the non magic parent path from a glob string library home page a href path to dependency file material ui node modules glob parent package json path to vulnerable library material ui node modules glob parent package json dependency hierarchy lerna tgz root library link tgz command tgz project tgz x glob parent tgz vulnerable library glob parent tgz strips glob magic from a string to provide the parent directory path library home page a href path to dependency file material ui node modules glob parent package json path to vulnerable library material ui node modules glob parent package json dependency hierarchy cpy cli tgz root library cpy tgz globby tgz fast glob tgz x glob parent tgz vulnerable library found in base branch next vulnerability details regular expression denial of service redos vulnerability was found in glob parent before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution glob parent step up your open source security game with whitesource ,0
30,2679425009.0,IssuesEvent,2015-03-26 16:38:04,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,Remove binary serialization from the compiler,Area-Compilers core-clr Enhancement Portability,This is all Serializable types. Incompatible with CoreCLR. Currently the only use is CodeLens.,True,Remove binary serialization from the compiler - This is all Serializable types. Incompatible with CoreCLR. Currently the only use is CodeLens.,1,remove binary serialization from the compiler this is all serializable types incompatible with coreclr currently the only use is codelens ,1
1561,22832265917.0,IssuesEvent,2022-07-12 13:52:33,elastic/elasticsearch,https://api.github.com/repos/elastic/elasticsearch,closed,Log snapshot restores at INFO level,>enhancement :Distributed/Snapshot/Restore Team:Distributed Supportability,"Today we don't record the start or end of a snapshot restore in the server logs by default (or even at `DEBUG` level). In contrast we do record regular index creation, and also the start and end of every snapshot. Snapshot restores are relatively rare, and it would definitely be helpful to see them in the logs.
I think we wouldn't want to see restores related to mounting searchable snapshots, nor creation of CCR followers, so we'd need a `silent` parameter rather like we have for index creation requests. I think we would usually want to see details of the restore request (e.g. the index pattern) in the log message.",True,"Log snapshot restores at INFO level - Today we don't record the start or end of a snapshot restore in the server logs by default (or even at `DEBUG` level). In contrast we do record regular index creation, and also the start and end of every snapshot. Snapshot restores are relatively rare, and it would definitely be helpful to see them in the logs.
I think we wouldn't want to see restores related to mounting searchable snapshots, nor creation of CCR followers, so we'd need a `silent` parameter rather like we have for index creation requests. I think we would usually want to see details of the restore request (e.g. the index pattern) in the log message.",1,log snapshot restores at info level today we don t record the start or end of a snapshot restore in the server logs by default or even at debug level in contrast we do record regular index creation and also the start and end of every snapshot snapshot restores are relatively rare and it would definitely be helpful to see them in the logs i think we wouldn t want to see restores related to mounting searchable snapshots nor creation of ccr followers so we d need a silent parameter rather like we have for index creation requests i think we would usually want to see details of the restore request e g the index pattern in the log message ,1
1330,18682510048.0,IssuesEvent,2021-11-01 08:09:38,primefaces/primeng,https://api.github.com/repos/primefaces/primeng,closed,Multiselect missing itemValue in OnChange callback after removeChip,enhancement LTS-PORTABLE,"[x ] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
**Current behavior**
When we unselect an item from the multiselect by clicking on the remove chip, the object emitted by the **onChange** callback is missing the **itemValue** property.
```
removeChip(chip: any, event: MouseEvent) {
.....
this.onChange.emit({ originalEvent: event, value: this.value });
```
When the item is clicked (onOptionClick) instead, the onChange event correctly returns the item:
```
onOptionClick(event) {
......
this.onChange.emit({originalEvent: event.originalEvent, value: this.value, itemValue: optionValue});
```
**Expected behavior**
Removing a chip and clicking on the item should have the same behavior in the onChange event.
**Minimal reproduction of the problem with instructions**
My proposed solution:
```
removeChip(chip: any, event: MouseEvent) {
    this.value = this.value.filter(val => !ObjectUtils.equals(val, chip, this.dataKey));
    this.onModelChange(this.value);
    let optionValue = this.getOptionValue(chip);
    this.onChange.emit({ originalEvent: event, value: this.value, itemValue: optionValue });
    this.updateLabel();
    this.updateFilledState();
}
```
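For context, here is a minimal sketch of a consumer-side handler (hypothetical, not part of PrimeNG or of this issue) that relies on itemValue being populated on both paths once the fix above is applied; the interface and function names are assumptions, while the event shape mirrors what the issue describes.
```
// Hypothetical handler wired up as <p-multiSelect (onChange)="onSelectionChange($event)">.
// The event shape mirrors the issue: originalEvent, value and, with the proposed fix,
// itemValue on both the removeChip and onOptionClick paths.
interface MultiSelectChangeEvent {
  originalEvent: Event;
  value: any[];
  itemValue?: any; // currently undefined when an option is removed via its chip
}

function onSelectionChange(event: MultiSelectChangeEvent): void {
  if (event.itemValue !== undefined) {
    console.log("toggled option:", event.itemValue);
  } else {
    // Without the fix, this branch is hit whenever a chip is removed,
    // forcing consumers to diff event.value against its previous state.
    console.warn("itemValue missing from the onChange payload");
  }
}
```
With the proposed change, the fallback branch should no longer be reached when a chip is removed.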
* **Angular version:** 11.1.1
* **PrimeNG version:** 11.3
Thanks.",True,"Multiselect missing itemValue in OnChange callback after removeChip - [x ] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
**Current behavior**
When we unselect an item from the multiselect by clicking on the remove chip, the object emitted by the **onChange** callback is missing the **itemValue** property.
```
removeChip(chip: any, event: MouseEvent) {
.....
this.onChange.emit({ originalEvent: event, value: this.value });
```
When the item is clicked (onOptionClick) instead, the onChange event correctly returns the item:
```
onOptionClick(event) {
......
this.onChange.emit({originalEvent: event.originalEvent, value: this.value, itemValue: optionValue});
```
**Expected behavior**
Removing a chip and clicking on the item should have the same behavior in the onChange event.
**Minimal reproduction of the problem with instructions**
My proposed solution:
```
removeChip(chip: any, event: MouseEvent) {
    this.value = this.value.filter(val => !ObjectUtils.equals(val, chip, this.dataKey));
    this.onModelChange(this.value);
    let optionValue = this.getOptionValue(chip);
    this.onChange.emit({ originalEvent: event, value: this.value, itemValue: optionValue });
    this.updateLabel();
    this.updateFilledState();
}
```
* **Angular version:** 11.1.1
* **PrimeNG version:** 11.3
Thanks.",1,multiselect missing itemvalue in onchange callback after removechip bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see current behavior when we unselect an item from the multi select by clicking on the remove chip the callback on the onchange is missing the itemvalue from the object removechip chip any event mouseevent this onchange emit originalevent event value this value when the item is clicked onoptionclick item returns the onchange return correctly the item onoptionclick event this onchange emit originalevent event originalevent value this value itemvalue optionvalue expected behavior the remove chip and click on the item should have the same behavior on the onchange event minimal reproduction of the problem with instructions my proposed solution removechip chip any event mouseevent this value this value filter val objectutils equals val chip this datakey this onmodelchange this value let optionvalue this getoptionvalue chip this onchange emit originalevent event value this value itemvalue optionvalue this updatelabel this updatefilledstate angular version primeng version thanks ,1
1230,16342338940.0,IssuesEvent,2021-05-13 00:04:44,inspec/inspec,https://api.github.com/repos/inspec/inspec,closed,interface resource: cannot read IP addresses on CoreOS,Aspect: Portability Component: Core Resources Stale,"
For the `interface` resource, `it { should exist }` is the only flavor that works; `ipv4_addresses` and state UP/Down are not recognised.
# Describe the problem
on system
```
inspec detect --target ssh://vesop@10.19.10.121
────────────────────────────── Platform Details ──────────────────────────────
Name: coreos
Families: linux, unix, os
Release: 1855.4.0
Arch: x86_64
```
```sh
title 'RE node deployed'
control ""DEPLOYED-1.0 Mgmt Interface mgmt0"" do
impact 1.0
title ""Interface mgmt0""
describe interface('mgmt0') do
it { should exist }
its('ipv4_addresses') { should include '172.31' }
its('ipv4_addresses') { should include '10.19.10' }
it { should have_an_ipv4_address }
end
# hostname -I is not supported, -i returns only ipv6
describe command(""hostname -i"") do
its('stdout') { should match (/fe80::92e2:baff:fe7b:2441/) }
end
end
```
ends up with
```
× DEPLOYED-1.0 Mgmt Interface mgmt0: Interface mgmt0 (4 failed)
✔ Interface mgmt0 should exist
× Interface mgmt0 should have an ipv4 address
expected Interface mgmt0 to respond to `has_an_ipv4_address?`
× Interface mgmt0 ipv4_addresses should include ""172.31""
expected [] to include ""172.31""
× Interface mgmt0 ipv4_addresses should include ""10.19.10""
expected [] to include ""10.19.10""
× Command: `hostname -I` stdout should match /fe80::92e2:baff:fe7b:2441/
expected """" to match /fe80::92e2:baff:fe7b:2441/
Diff:
@@ -1,2 +1,2 @@
-/fe80::92e2:baff:fe7b:2441/
+""""
```
In the InSpec shell it says:
```
inspec> os.interface_info
NoMethodError: undefined method `interface_info' for Operating System Detection:#
from (pry):12:in `load_with_context'
```
",True,"interface resource: cannot read IP addresses on CoreOS -
For the `interface` resource, `it { should exist }` is the only flavor that works; `ipv4_addresses` and state UP/Down are not recognised.
# Describe the problem
on system
```
inspec detect --target ssh://vesop@10.19.10.121
────────────────────────────── Platform Details ──────────────────────────────
Name: coreos
Families: linux, unix, os
Release: 1855.4.0
Arch: x86_64
```
```sh
title 'RE node deployed'
control ""DEPLOYED-1.0 Mgmt Interface mgmt0"" do
impact 1.0
title ""Interface mgmt0""
describe interface('mgmt0') do
it { should exist }
its('ipv4_addresses') { should include '172.31' }
its('ipv4_addresses') { should include '10.19.10' }
it { should have_an_ipv4_address }
end
# hostname -I is not supported, -i returns only ipv6
describe command(""hostname -i"") do
its('stdout') { should match (/fe80::92e2:baff:fe7b:2441/) }
end
end
```
ends up with
```
× DEPLOYED-1.0 Mgmt Interface mgmt0: Interface mgmt0 (4 failed)
✔ Interface mgmt0 should exist
× Interface mgmt0 should have an ipv4 address
expected Interface mgmt0 to respond to `has_an_ipv4_address?`
× Interface mgmt0 ipv4_addresses should include ""172.31""
expected [] to include ""172.31""
× Interface mgmt0 ipv4_addresses should include ""10.19.10""
expected [] to include ""10.19.10""
× Command: `hostname -I` stdout should match /fe80::92e2:baff:fe7b:2441/
expected """" to match /fe80::92e2:baff:fe7b:2441/
Diff:
@@ -1,2 +1,2 @@
-/fe80::92e2:baff:fe7b:2441/
+""""
```
In the InSpec shell it says:
```
inspec> os.interface_info
NoMethodError: undefined method `interface_info' for Operating System Detection:#
from (pry):12:in `load_with_context'
```
",1,interface resource cannot read ip addresses on coreos resource interface it exist is the only flavor that works address state up down are not recognised describe the problem on system inspec detect target ssh vesop ────────────────────────────── platform details ────────────────────────────── name coreos families linux unix os release arch sh title re node deployed control deployed mgmt interface do impact title interface describe interface do it should exist its addresses should include its addresses should include it should have an address end hostname i is not supported i returns only describe command hostname i do its stdout should match baff end end endup with × deployed mgmt interface interface failed ✔ interface should exist × interface should have an address expected interface to respond to has an address × interface addresses should include expected to include × interface addresses should include expected to include × command hostname i stdout should match baff expected to match baff diff baff on shell it says inspec os interface info nomethoderror undefined method interface info for operating system detection from pry in load with context ,1
33609,9196691432.0,IssuesEvent,2019-03-07 07:58:03,Microsoft/WindowsTemplateStudio,https://api.github.com/repos/Microsoft/WindowsTemplateStudio,closed,Build dev.templates.tests.full_20190306.5 failed,bug vsts-build,"## Build dev.templates.tests.full_20190306.5
- **Build result:** `failed`
- **Build queued:** 3/6/2019 10:53:38 PM
- **Build duration:** 48.41 minutes
### Details
Build [dev.templates.tests.full_20190306.5](https://winappstudio.visualstudio.com/web/build.aspx?pcguid=a4ef43be-68ce-4195-a619-079b4d9834c2&builduri=vstfs%3a%2f%2f%2fBuild%2fBuild%2f27205) failed
+ xunit.console.exe : GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""Blank"", framework:
SupportedFramework { Name = ""MVVMBasic"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
At pbatch:27 char:27
+
+ CategoryInfo : NotSpecified: ( GenerateAll...: ""Uwp"") [FAIL]:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
+ PSComputerName : [localhost]
GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""Blank"", framework: SupportedFramework { Name =
""MVVMLight"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""Blank"", framework: SupportedFramework { Name =
""CodeBehind"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""SplitView"", framework: SupportedFramework {
Name = ""MVVMBasic"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""SplitView"", framework: SupportedFramework {
Name = ""MVVMLight"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""SplitView"", framework: SupportedFramework {
Name = ""CodeBehind"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""TabbedNav"", framework: SupportedFramework {
Name = ""MVVMBasic"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""TabbedNav"", framework: SupportedFramework {
Name = ""MVVMLight"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""TabbedNav"", framework: SupportedFramework {
Name = ""CodeBehind"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""Blank"", framework: SupportedFramework { Name =
""MVVMBasic"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""Blank"", framework: SupportedFramework { Name =
""MVVMLight"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""Blank"", framework: SupportedFramework { Name =
""CodeBehind"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""SplitView"", framework: SupportedFramework { Name
= ""MVVMBasic"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""SplitView"", framework: SupportedFramework { Name
= ""MVVMLight"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""SplitView"", framework: SupportedFramework { Name
= ""CodeBehind"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""TabbedNav"", framework: SupportedFramework { Name
= ""MVVMBasic"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""TabbedNav"", framework: SupportedFramework { Name
= ""MVVMLight"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""TabbedNav"", framework: SupportedFramework { Name
= ""CodeBehind"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
+ Process completed with exit code 18 and had 1 error(s) written to the error stream.
Find detailed information in the [build log files](https://uwpctdiags.blob.core.windows.net/buildlogs/dev.templates.tests.full_20190306.5_logs.zip)
",1.0,"Build dev.templates.tests.full_20190306.5 failed - ## Build dev.templates.tests.full_20190306.5
- **Build result:** `failed`
- **Build queued:** 3/6/2019 10:53:38 PM
- **Build duration:** 48.41 minutes
### Details
Build [dev.templates.tests.full_20190306.5](https://winappstudio.visualstudio.com/web/build.aspx?pcguid=a4ef43be-68ce-4195-a619-079b4d9834c2&builduri=vstfs%3a%2f%2f%2fBuild%2fBuild%2f27205) failed
+ xunit.console.exe : GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""Blank"", framework:
SupportedFramework { Name = ""MVVMBasic"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
At pbatch:27 char:27
+
+ CategoryInfo : NotSpecified: ( GenerateAll...: ""Uwp"") [FAIL]:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
+ PSComputerName : [localhost]
GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""Blank"", framework: SupportedFramework { Name =
""MVVMLight"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""Blank"", framework: SupportedFramework { Name =
""CodeBehind"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""SplitView"", framework: SupportedFramework {
Name = ""MVVMBasic"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""SplitView"", framework: SupportedFramework {
Name = ""MVVMLight"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""SplitView"", framework: SupportedFramework {
Name = ""CodeBehind"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""TabbedNav"", framework: SupportedFramework {
Name = ""MVVMBasic"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""TabbedNav"", framework: SupportedFramework {
Name = ""MVVMLight"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithSonarLintAsync(projectType: ""TabbedNav"", framework: SupportedFramework {
Name = ""CodeBehind"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""Blank"", framework: SupportedFramework { Name =
""MVVMBasic"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""Blank"", framework: SupportedFramework { Name =
""MVVMLight"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""Blank"", framework: SupportedFramework { Name =
""CodeBehind"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""SplitView"", framework: SupportedFramework { Name
= ""MVVMBasic"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""SplitView"", framework: SupportedFramework { Name
= ""MVVMLight"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""SplitView"", framework: SupportedFramework { Name
= ""CodeBehind"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""TabbedNav"", framework: SupportedFramework { Name
= ""MVVMBasic"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""TabbedNav"", framework: SupportedFramework { Name
= ""MVVMLight"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
GenerateAllPagesAndFeaturesAndCheckWithVBStyleAsync(projectType: ""TabbedNav"", framework: SupportedFramework { Name
= ""CodeBehind"", Type = FrontEnd }, platform: ""Uwp"") [FAIL]
+ Process completed with exit code 18 and had 1 error(s) written to the error stream.
Find detailed information in the [build log files](https://uwpctdiags.blob.core.windows.net/buildlogs/dev.templates.tests.full_20190306.5_logs.zip)
",0,build dev templates tests full failed build dev templates tests full build result failed build queued pm build duration minutes details build failed xunit console exe generateallpagesandfeaturesandcheckwithsonarlintasync projecttype blank framework supportedframework name mvvmbasic type frontend platform uwp at pbatch char categoryinfo notspecified generateall uwp string remoteexception fullyqualifiederrorid nativecommanderror pscomputername generateallpagesandfeaturesandcheckwithsonarlintasync projecttype blank framework supportedframework name mvvmlight type frontend platform uwp generateallpagesandfeaturesandcheckwithsonarlintasync projecttype blank framework supportedframework name codebehind type frontend platform uwp generateallpagesandfeaturesandcheckwithsonarlintasync projecttype splitview framework supportedframework name mvvmbasic type frontend platform uwp generateallpagesandfeaturesandcheckwithsonarlintasync projecttype splitview framework supportedframework name mvvmlight type frontend platform uwp generateallpagesandfeaturesandcheckwithsonarlintasync projecttype splitview framework supportedframework name codebehind type frontend platform uwp generateallpagesandfeaturesandcheckwithsonarlintasync projecttype tabbednav framework supportedframework name mvvmbasic type frontend platform uwp generateallpagesandfeaturesandcheckwithsonarlintasync projecttype tabbednav framework supportedframework name mvvmlight type frontend platform uwp generateallpagesandfeaturesandcheckwithsonarlintasync projecttype tabbednav framework supportedframework name codebehind type frontend platform uwp generateallpagesandfeaturesandcheckwithvbstyleasync projecttype blank framework supportedframework name mvvmbasic type frontend platform uwp generateallpagesandfeaturesandcheckwithvbstyleasync projecttype blank framework supportedframework name mvvmlight type frontend platform uwp generateallpagesandfeaturesandcheckwithvbstyleasync projecttype blank framework supportedframework name codebehind type frontend platform uwp generateallpagesandfeaturesandcheckwithvbstyleasync projecttype splitview framework supportedframework name mvvmbasic type frontend platform uwp generateallpagesandfeaturesandcheckwithvbstyleasync projecttype splitview framework supportedframework name mvvmlight type frontend platform uwp generateallpagesandfeaturesandcheckwithvbstyleasync projecttype splitview framework supportedframework name codebehind type frontend platform uwp generateallpagesandfeaturesandcheckwithvbstyleasync projecttype tabbednav framework supportedframework name mvvmbasic type frontend platform uwp generateallpagesandfeaturesandcheckwithvbstyleasync projecttype tabbednav framework supportedframework name mvvmlight type frontend platform uwp generateallpagesandfeaturesandcheckwithvbstyleasync projecttype tabbednav framework supportedframework name codebehind type frontend platform uwp process completed with exit code and had error s written to the error stream find detailed information in the ,0
82966,10316502744.0,IssuesEvent,2019-08-30 10:09:04,OpenEnergyPlatform/oedialect,https://api.github.com/repos/OpenEnergyPlatform/oedialect,closed,Missing documentation for developers,Time-L Urgency-L documentation enhancement requirement_specification specification_sheet,"Are you planning some kind of documentation?
A short description and some examples would be great. I started in #8.",1.0,"Missing documentation for developers - Are you planning some kind of documentation?
A short description and some examples would be great. I started in #8.",0,missing documentation for developers are you planning some kind of documentation a short description and some examples would be great i started in ,0
68808,13183854873.0,IssuesEvent,2020-08-12 18:17:34,robocorp/robotframework-lsp,https://api.github.com/repos/robocorp/robotframework-lsp,closed,Packages listed from the cloud should be sorted.,enhancement robocode,The last one selected for a given directory should be at the top and others should be sorted by the name.,1.0,Packages listed from the cloud should be sorted. - The last one selected for a given directory should be at the top and others should be sorted by the name.,0,packages listed from the cloud should be sorted the last one selected for a given directory should be at the top and others should be sorted by the name ,0
1615,23311008928.0,IssuesEvent,2022-08-08 08:16:18,codbex/codbex-kronos,https://api.github.com/repos/codbex/codbex-kronos,opened,[Build] Command execution error when building kronos project on windows,bug effort-medium supportability,"[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:3.1.0:exec (chmod) on project kronos-resources-neo-sdk: Command execution failed.: Cannot run program ""chmod"" (in directory ""C:\Users\Ayhan\IdeaProjects\codbex-kronos\
resources\resources-neo-sdk""): CreateProcess error=2, The system cannot find the file specified
Related to this plugin https://github.com/codbex/codbex-kronos/blob/main/resources/resources-neo-sdk/pom.xml#L97-L118",True,"[Build] Command execution error when building kronos project on windows - [ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:3.1.0:exec (chmod) on project kronos-resources-neo-sdk: Command execution failed.: Cannot run program ""chmod"" (in directory ""C:\Users\Ayhan\IdeaProjects\codbex-kronos\
resources\resources-neo-sdk""): CreateProcess error=2, The system cannot find the file specified
Related to this plugin https://github.com/codbex/codbex-kronos/blob/main/resources/resources-neo-sdk/pom.xml#L97-L118",1, command execution error when building kronos project on windows failed to execute goal org codehaus mojo exec maven plugin exec chmod on project kronos resources neo sdk command execution failed cannot run program chmod in directory c users ayhan ideaprojects codbex kronos resources resources neo sdk createprocess error the system cannot find the file specified related to this plugin ,1
311679,9537542658.0,IssuesEvent,2019-04-30 12:48:23,HGustavs/LenaSYS,https://api.github.com/repos/HGustavs/LenaSYS,closed,Email vulnerability allows an attacker to retrieve all emails associated with a given course,GruppB2019 activeGruppB2019 bug highPriority vulnerability,"The following code runs without checking if the user has a valid login:
https://github.com/HGustavs/LenaSYS/blob/0b3972bc464df6240a4088ec74e41fcd70bad63f/DuggaSys/resultedservice.php#L72-L104
This allows a malicious actor to retrieve all emails for a course (students and teachers alike) using only publicly known information, by simply sending a POST request with the correct parameters.
This may be a breach of GDPR, as email addresses may be used to identify individuals.
This is mitigated by the ability to block direct access to this page via webserver configuration, but that should not be relied upon.",1.0,"Email vulnerability allows an attacker to retrieve all emails associated with a given course - The following code runs without checking if the user has a valid login:
https://github.com/HGustavs/LenaSYS/blob/0b3972bc464df6240a4088ec74e41fcd70bad63f/DuggaSys/resultedservice.php#L72-L104
This allows a malicious actor to retrieve all emails for a course (students and teachers alike) using only publicly known information, by simply sending a POST request with the correct parameters.
This may be a breach of GDPR, as email addresses may be used to identify individuals.
This is mitigated by the ability to block direct access to this page via webserver configuration, but that should not be relied upon.",0,email vulnerability allows an attacker to retrieve all emails associated with a given course the following code runs without checking if the user has a valid login this allows a malicious actor to retrieve all emails for a course students and teachers alike using only publicly known information by simply sending a post request with the correct parameters this may be a breach of gdpr as email addresses may be used to identify individuals this is mitigated by the ability to block direct access to this page via webserver configuration but that should not be relied upon ,0
79879,29496246576.0,IssuesEvent,2023-06-02 17:14:43,department-of-veterans-affairs/va.gov-team,https://api.github.com/repos/department-of-veterans-affairs/va.gov-team,opened,Bank account prompt for disability claim is missing context,526ez-Defects,"## Issue Description
When a user reaches step 4 of the 21-526EZ form, they're asked to share their bank account information. This is presumably to set up a direct deposit for benefit claims payments but there is zero context provided on what the information will be used for.
Ideally there will be some language here that explains why we are asking for this sensitive information.

### Information needed to assess the severity of the issue
- Users Impacted: All of them
- Issue Impact: Potential bounce if the user is discouraged from sharing that information without knowing what it is used for.
- Workaround:
- Legal Requirement:
- Loss of Service:
- Permanent Impact:
- Vulnerability:
### Information that aids in the research and troubleshooting of the issue
- Date and Time of Issue:
- Form/Page of Issue: http://localhost:3001/disability/file-disability-claim-form-21-526ez/payment-information
- Error Message:
- How to reproduce:
- Unique IDs:
- Application ID:
- Hardware Issue Observed On:
- Browser type / version:
- Case#, if available:
- Reporter name in va.gov:
### Any Additional Information:
",1.0,"Bank account prompt for disability claim is missing context - ## Issue Description
When a user reaches step 4 of the 21-526EZ form, they're asked to share their bank account information. This is presumably to set up a direct deposit for benefit claims payments but there is zero context provided on what the information will be used for.
Ideally there will be some language here that explains why we are asking for this sensitive information.

### Information needed to assess the severity of the issue
- Users Impacted: All of them
- Issue Impact: Potential bounce if the user is discouraged from sharing that information without knowing what it is used for.
- Workaround:
- Legal Requirement:
- Loss of Service:
- Permanent Impact:
- Vulnerability:
### Information that aids in the research and troubleshooting of the issue
- Date and Time of Issue:
- Form/Page of Issue: http://localhost:3001/disability/file-disability-claim-form-21-526ez/payment-information
- Error Message:
- How to reproduce:
- Unique IDs:
- Application ID:
- Hardware Issue Observed On:
- Browser type / version:
- Case#, if available:
- Reporter name in va.gov:
### Any Additional Information:
",0,bank account prompt for disability claim is missing context issue description when a user reaches step of the form they re asked to share their bank account information this is presumably to set up a direct deposit for benefit claims payments but there is zero context provided on what the information will be used for ideally there will be some language here that explains why we are asking for this sensitive information information needed to assess the severity of the issue users impacted all of them issue impact potential bounce if the user is discouraged from sharing that information without knowing what it is used for workaround legal requirement loss of service permanent impact vulnerability information that aids in the research and troubleshooting of the issue date and time of issue form page of issue error message how to reproduce unique ids application id hardware issue observed on browser type version case if available reporter name in va gov any additional information examples where does the user think they are in the process does it impact a specific flow or population is this issue seen consistently or has it just started when did we started seeing this issue what ux ui components are the users interacting with before the error is produced ,0
136739,19916386950.0,IssuesEvent,2022-01-25 23:20:58,AlaskaAirlines/AuroDesignTokens,https://api.github.com/repos/AlaskaAirlines/AuroDesignTokens,closed,Auro design tokens: Oneworld/MP Tier tokens,Type: Feature Type: Design design tokens,"We have an upcoming 100k tier being added to the Mileage Plan program.
Sometimes I see these tiers with different colors. Should we add design tokens for these tiers so it's easier to implement and is consistent?
@vidalmenAS I'd love to discuss this one with you.
Million miler uses #767676
https://www.alaskaair.com/content/mileage-plan/membership-benefits?lid=mileageplan:mileage-plan-overview:membership-benefits
One world does use colors to denote each tier that we should create tokens for.
https://cdn4.loyaltylobby.com/wp-content/uploads/2014/12/Oneworld-Enhanced-Sapphire-Benefits-Table.png",2.0,"Auro design tokens: Oneworld/MP Tier tokens - We have an upcoming 100k tier being added to the Mileage Plan program.
Sometimes I see these tiers with different colors. Should we add design tokens for these tiers so it's easier to implement and is consistent?
@vidalmenAS I'd love to discuss this one with you.
Million miler uses #767676
https://www.alaskaair.com/content/mileage-plan/membership-benefits?lid=mileageplan:mileage-plan-overview:membership-benefits
One world does use colors to denote each tier that we should create tokens for.
https://cdn4.loyaltylobby.com/wp-content/uploads/2014/12/Oneworld-Enhanced-Sapphire-Benefits-Table.png",0,auro design tokens oneworld mp tier tokens we have an upcoming tier being added to the mileage plan program sometimes i see these tiers with different colors should we add design tokens for these tiers so its easier to implement and is consistent vidalmenas i d love to discuss this one with you million miler uses one world does use colors to denote each tier that we should create tokens for ,0
353,6084763269.0,IssuesEvent,2017-06-17 07:36:02,javaee-security-spec/soteria,https://api.github.com/repos/javaee-security-spec/soteria,closed,Soteria & Hammock,portability,"Hi, I'm interested in making sure Soteria works with Hammock. Looking at your build output, looks like I just need the appropriate JARs on the classpath. Anything else you can think of to make it work?",True,"Soteria & Hammock - Hi, I'm interested in making sure Soteria works with Hammock. Looking at your build output, looks like I just need the appropriate JARs on the classpath. Anything else you can think of to make it work?",1,soteria hammock hi i m interested in making sure soteria works with hammock looking at your build output looks like i just need the appropriate jars on the classpath anything else you can think of to make it work ,1
8156,7255035084.0,IssuesEvent,2018-02-16 13:30:42,raiden-network/raiden,https://api.github.com/repos/raiden-network/raiden,closed,Adapt linux and macOS deployment infrastructure for py3,infrastructure sprint_candidate,"## Problem Definition
The deployment tools for linux and macOS haven't been tested under py3 and updated `ethereum` library.
## Solution
Test deployment tools
## Tasklist
- [x] Test linux deployment
- [x] Test macOS deployment
",1.0,"Adapt linux and macOS deployment infrastructure for py3 - ## Problem Definition
The deployment tools for linux and macOS haven't been tested under py3 and updated `ethereum` library.
## Solution
Test deployment tools
## Tasklist
- [x] Test linux deployment
- [x] Test macOS deployment
",0,adapt linux and macos deployment infrastructure for problem definition the deployment tools for linux and macos haven t been tested under and updated ethereum library solution test deployment tools tasklist test linux deployment test macos deployment ,0
147660,11800489132.0,IssuesEvent,2020-03-18 17:38:11,Azure/sap-hana,https://api.github.com/repos/Azure/sap-hana,opened,Capability to configure OS clustering software on SLES,Ansible Test enhancement,"## Problem Statement
The current V2 codebase only supports single node HANA instances, but customers would like to be able to provision clustered HANA systems that support automated failover when a node in the cluster fails. In order to support this, the OS must be configured as part of a corosync cluster.
## Enhancement
Ensure that the codebase can support provisioning of a HANA cluster on the SLES platform.
This will be demonstrated by introducing a new V2 templated input JSON file `clustered_hana` that demonstrates the capability to:
- [ ] Configure the HA clustering packages on both VMs in the cluster.
- [ ] Run some ansible commands/playbooks that demonstrate the cluster failure behaviour.
## Checklist
- [ ] Usage documentation updated as necessary
- [ ] Architecture documentation updated as necessary
",1.0,"Capability to configure OS clustering software on SLES - ## Problem Statement
The current V2 codebase only supports single node HANA instances, but customers would like to be able to provision clustered HANA systems that support automated failover when a node in the cluster fails. In order to support this, the OS must be configured as part of a corosync cluster.
## Enhancement
Ensure that the codebase can support provisioning of a HANA cluster on the SLES platform.
This will be demonstrated by introducing a new V2 templated input JSON file `clustered_hana` that demonstrates the capability to:
- [ ] Configure the HA clustering packages on both VMs in the cluster.
- [ ] Run some ansible commands/playbooks that demonstrate the cluster failure behaviour.
## Checklist
- [ ] Usage documentation updated as necessary
- [ ] Architecture documentation updated as necessary
",0,capability to configure os clustering software on sles problem statement the current codebase only supports single node hana instances but customers would like to be able to provision clustered hana systems that support automated failover when a node in the cluster fails in order to support this the os must be configured as part of a corosync cluster enhancement ensure that the codebase can support provisioning of a hana cluster on the sles platform this will be demonstrated by introducing a new templated input json file clustered hana that demonstrates the capability to configure the ha clustering packages on both vms in the cluster run some ansible commands playbooks that demonstrate the cluster failure behaviour checklist usage documentation updated as necessary architecture documentation updated as necessary ,0
704967,24216451886.0,IssuesEvent,2022-09-26 07:14:18,zephyrproject-rtos/zephyr,https://api.github.com/repos/zephyrproject-rtos/zephyr,closed,tests: debug: test case subsys/debug/coredump failed on acrn_ehl_crb on branch v2.7,bug priority: low area: Debugging area: ACRN,"Describe the bug
The subsys/debug/coredump test case failed on the acrn_ehl_crb board. Twister catches the console output to determine
whether it passes or not. It seems that the front part of the ACRN console output is missing.
Please also mention any information which could help others to understand
the problem you're facing:
What target platform are you using?
acrn_ehl_crb
What have you tried to diagnose or workaround this issue?
This test case failure is most likely because the front part of the core dump message cannot be output by the ACRN console.
To Reproduce
Steps to reproduce the behavior:
source zephyr-env.sh
twister -p acrn_ehl_crb --device-testing --device-serial-pty=""/opt/remotehw/acrn-test-pty.exp,ehlsku11"" --west-flash=""/opt/remotehw/remotehw-x86-acrn.sh,ehlsku11"" -vv -T tests/subsys/debug/coredump_backend
See error
Expected behavior
The test case should pass.
Impact
No serious impact.
Logs and console output
DEBUG - DEVICE: ACRN:\>vm_console 0
DEBUG - DEVICE:
DEBUG - DEVICE: ----- Entering VM 0 Shell -----
DEBUG - DEVICE: 0000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:460000000000000071441000000000000100000000000000c820120000000000
DEBUG - DEVICE: E: #CD:701d120000000000103f100000000000f803000000000000f080100000000000
DEBUG - DEVICE: E: #CD:460000000000000071441000000000000100000000000000c820120000000000
DEBUG - DEVICE: E: #CD:b01d120000000000103f100000000000f803000000000000f080100000000000
DEBUG - DEVICE: E: #CD:c8201200000000004600000000000000e01d1200000000006b2d100000000000
DEBUG - DEVICE: E: #CD:0e000000000000000a000000000000000e000000000000004100100000000000
DEBUG - DEVICE: E: #CD:001e120000000000592c1000000000000100000000000000f81e120000000000
DEBUG - DEVICE: E: #CD:a01e1200000000003211100000000000b01e120000000000cc85100000000000
DEBUG - DEVICE: E: #CD:da851000000000000a000000000000002e9b1000000000003700000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000046000000000000007144100000000000
DEBUG - DEVICE: E: #CD:01000000000000002004110000000000a01e120000000000103f100000000000
DEBUG - DEVICE: E: #CD:01000000000000000202000000000000cd85100000000000f81e120000000000
DEBUG - DEVICE: E: #CD:e01e120000000000b701100000000000f01e120000000000b70110000f000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:c01f1200000000001401100000000000d01f1200000000001000000030000000
DEBUG - DEVICE: E: #CD:d01f120000000000101f120000000000e01f120000000000c885100000000000
DEBUG - DEVICE: E: #CD:02000000000000003008000000000000b0041100000000000000000000000000
DEBUG - DEVICE: E: #CD:4600000000000000714410000000000046000000000000007144100000000000
DEBUG - DEVICE: E: #CD:4600000000000000714410000000000001000000000000004702000000000000
DEBUG - DEVICE: E: #CD:a01f120000000000786a10000000000002000000000000003000000000000000
DEBUG - DEVICE: E: #CD:c01f1200000000003e2110000000000002000000000000000000000000000000
DEBUG - DEVICE: E: #CD:d01f1200000000001500100000000000e01f120000000000d237100000000000
DEBUG - DEVICE: E: #CD:f01f120000000000df0210000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:END#
DEBUG - DEVICE: E: Halting system
DEBUG - Timed out while monitoring serial output on acrn_ehl_crb
DEBUG - Process /opt/remotehw/acrn-test-pty.exp,ehlsku11 terminated outs: None errs None
DEBUG - run status: acrn_ehl_crb/tests/subsys/debug/coredump/coredump.logging_backend timeout
INFO - 1/1 acrn_ehl_crb tests/subsys/debug/coredump/coredump.logging_backend FAILED Timeout (device 78.425s)
ERROR - see: /home/emai/work/zephyrproject/zephyr/twister-out/acrn_ehl_crb/tests/subsys/debug/coredump/coredump.logging_backend/handler.log
Environment (please complete the following information):
OS: Linux
Toolchain: Zephyr SDK, 0.14.1
Commit SHA or Version used
Additional context
This might not be an issue of Zephyr.",1.0,"tests: debug: test case subsys/debug/coredump failed on acrn_ehl_crb on branch v2.7 - Describe the bug
The subsys/debug/coredump test case failed on the acrn_ehl_crb board. Twister catches the console output to determine
whether it passes or not. It seems that the front part of the ACRN console output is missing.
Please also mention any information which could help others to understand
the problem you're facing:
What target platform are you using?
acrn_ehl_crb
What have you tried to diagnose or workaround this issue?
This test case failure is most likely because the front part of the core dump message cannot be output by the ACRN console.
To Reproduce
Steps to reproduce the behavior:
source zephyr-env.sh
twister -p acrn_ehl_crb --device-testing --device-serial-pty=""/opt/remotehw/acrn-test-pty.exp,ehlsku11"" --west-flash=""/opt/remotehw/remotehw-x86-acrn.sh,ehlsku11"" -vv -T tests/subsys/debug/coredump_backend
See error
Expected behavior
The test case should pass.
Impact
No serious impact.
Logs and console output
DEBUG - DEVICE: ACRN:\>vm_console 0
DEBUG - DEVICE:
DEBUG - DEVICE: ----- Entering VM 0 Shell -----
DEBUG - DEVICE: 0000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:460000000000000071441000000000000100000000000000c820120000000000
DEBUG - DEVICE: E: #CD:701d120000000000103f100000000000f803000000000000f080100000000000
DEBUG - DEVICE: E: #CD:460000000000000071441000000000000100000000000000c820120000000000
DEBUG - DEVICE: E: #CD:b01d120000000000103f100000000000f803000000000000f080100000000000
DEBUG - DEVICE: E: #CD:c8201200000000004600000000000000e01d1200000000006b2d100000000000
DEBUG - DEVICE: E: #CD:0e000000000000000a000000000000000e000000000000004100100000000000
DEBUG - DEVICE: E: #CD:001e120000000000592c1000000000000100000000000000f81e120000000000
DEBUG - DEVICE: E: #CD:a01e1200000000003211100000000000b01e120000000000cc85100000000000
DEBUG - DEVICE: E: #CD:da851000000000000a000000000000002e9b1000000000003700000000000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000046000000000000007144100000000000
DEBUG - DEVICE: E: #CD:01000000000000002004110000000000a01e120000000000103f100000000000
DEBUG - DEVICE: E: #CD:01000000000000000202000000000000cd85100000000000f81e120000000000
DEBUG - DEVICE: E: #CD:e01e120000000000b701100000000000f01e120000000000b70110000f000000
DEBUG - DEVICE: E: #CD:0000000000000000000000000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:c01f1200000000001401100000000000d01f1200000000001000000030000000
DEBUG - DEVICE: E: #CD:d01f120000000000101f120000000000e01f120000000000c885100000000000
DEBUG - DEVICE: E: #CD:02000000000000003008000000000000b0041100000000000000000000000000
DEBUG - DEVICE: E: #CD:4600000000000000714410000000000046000000000000007144100000000000
DEBUG - DEVICE: E: #CD:4600000000000000714410000000000001000000000000004702000000000000
DEBUG - DEVICE: E: #CD:a01f120000000000786a10000000000002000000000000003000000000000000
DEBUG - DEVICE: E: #CD:c01f1200000000003e2110000000000002000000000000000000000000000000
DEBUG - DEVICE: E: #CD:d01f1200000000001500100000000000e01f120000000000d237100000000000
DEBUG - DEVICE: E: #CD:f01f120000000000df0210000000000000000000000000000000000000000000
DEBUG - DEVICE: E: #CD:END#
DEBUG - DEVICE: E: Halting system
DEBUG - Timed out while monitoring serial output on acrn_ehl_crb
DEBUG - Process /opt/remotehw/acrn-test-pty.exp,ehlsku11 terminated outs: None errs None
DEBUG - run status: acrn_ehl_crb/tests/subsys/debug/coredump/coredump.logging_backend timeout
INFO - 1/1 acrn_ehl_crb tests/subsys/debug/coredump/coredump.logging_backend FAILED Timeout (device 78.425s)
ERROR - see: /home/emai/work/zephyrproject/zephyr/twister-out/acrn_ehl_crb/tests/subsys/debug/coredump/coredump.logging_backend/handler.log
Environment (please complete the following information):
OS: Linux
Toolchain: Zephyr SDK, 0.14.1
Commit SHA or Version used
Additional context
The might not be an issue of zephyr.",0,tests debug test case subsys debug coredump failed on acrn ehl crb on branch describe the bug the subsys debug coredump testcase failed on acrn ehl crb board the twister will catch the console output to determine whether it passes or not it seems like that the front part of acrn console output would be missing please also mention any information which could help others to understand the problem you re facing what target platform are you using acrn ehl crb what have you tried to diagnose or workaround this issue this test case failure is likely to the front part of the core dump message cannot be output by acrn console to reproduce steps to reproduce the behavior source zephyr env sh twister p acrn ehl crb device testing device serial pty opt remotehw acrn test pty exp west flash opt remotehw remotehw acrn sh vv t tests subsys debug coredump backend see error expected behavior the test case should be passed impact no serious impact logs and console output debug device acrn vm console debug device debug device entering vm shell debug device debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd debug device e cd end debug device e halting system debug timed out while monitoring serial output on acrn ehl crb debug process opt remotehw acrn test pty exp terminated outs none errs none debug run status acrn ehl crb tests subsys debug coredump coredump logging backend timeout info acrn ehl crb tests subsys debug coredump coredump logging backend failed timeout device error see home emai work zephyrproject zephyr twister out acrn ehl crb tests subsys 
debug coredump coredump logging backend handler log environment please complete the following information os linux toolchain zephyr sdk commit sha or version used additional context the might not be an issue of zephyr ,0
170263,13181280803.0,IssuesEvent,2020-08-12 14:06:01,Devdevdavid/Pascal-Project,https://api.github.com/repos/Devdevdavid/Pascal-Project,closed,[Temperature interface] Add a delta comparison mode,Amélioration À tester,"It would be interesting to be able to compare the values of the two temperature sensors against each other.
This would be particularly useful for detecting whether it is viable to open the window when it is cold/hot.",1.0,"[Temperature interface] Add a delta comparison mode - It would be interesting to be able to compare the values of the two temperature sensors against each other.
This would be particularly useful for detecting whether it is viable to open the window when it is cold/hot.",0, add a delta comparison mode it would be interesting to be able to compare the values of the two temperature sensors against each other this would be particularly useful for detecting whether it is viable to open the window when it is cold hot ,0
115316,14712574877.0,IssuesEvent,2021-01-05 09:08:10,knurling-rs/defmt,https://api.github.com/repos/knurling-rs/defmt,closed,feature request: treat {} the same as {:?},breaking change difficulty: medium priority: medium status: needs design type: enhancement,"Defmt doesn't support `{}`, it requires you to write `{:?}`. This is the biggest annoyance when porting code from `std::fmt` to `defmt`. It would be great if it treated both the same.
If a codebase wants to keep compatibility with both `defmt` and `std::fmt` it's even worse: a find+replace is not enough, since `{}` and `{:?}` mean different things in `std::fmt` (one uses `Display`, the other uses `Debug`), so this would change the behavior of the `std` builds.",1.0,"feature request: treat {} the same as {:?} - Defmt doesn't support `{}`, it requires you to write `{:?}`. This is the biggest annoyance when porting code from `std::fmt` to `defmt`. It would be great if it treated both the same.
If a codebase wants to keep compatibility with both `defmt` and `std::fmt` it's even worse: a find+replace is not enough, since `{}` and `{:?}` mean different things in `std::fmt` (one uses `Display`, the other uses `Debug`), so this would change the behavior of the `std` builds.",0,feature request treat the same as defmt doesn t support it requires you to write this is the biggest annoyance when porting code from std fmt to defmt it would be great if it treated both the same if a codebase wants to keep compatibility with both defmt and std fmt it s even worse a find replace is not enough since and mean different things in std fmt one uses display the other uses debug so this would change the behavior of the std builds ,0
169,3876232553.0,IssuesEvent,2016-04-12 06:52:11,wahern/cqueues,https://api.github.com/repos/wahern/cqueues,opened,Building with multiple include dirs,packaging/portability,If I have lua at `/foo/lua.h` and openssl at `/bar/openssl/openssl.h` what are the make arguments I should pass?,True,Building with multiple include dirs - If I have lua at `/foo/lua.h` and openssl at `/bar/openssl/openssl.h` what are the make arguments I should pass?,1,building with multiple include dirs if i have lua at foo lua h and openssl at bar openssl openssl h what are the make arguments i should pass ,1
89263,10593557241.0,IssuesEvent,2019-10-09 15:04:34,dgraph-io/badger,https://api.github.com/repos/dgraph-io/badger,closed,Backup/Restore v1.5.5 -> v2.0.0 fails with ,area/documentation kind/enhancement priority/P3 status/accepted,"### What version of Go are you using (`go version`)?
$ go version
go version go1.13 linux/amd64
### What version of Badger are you using?
v1.5.5, v1.6.0, v2.0.0
### Does this issue reproduce with the latest master?
Yes (ish)
### What are the hardware specifications of the machine (RAM, OS, Disk)?
x86-64
### What did you do?
Trying to use the instructions [here](https://github.com/dgraph-io/badger/blob/master/README.md#i-see-manifest-has-unsupported-version-x-we-support-y-error) to convert a pre v2 database to v2. I get an error about the wiretype. I suspect this may be related to 4e5cbcc8574b8258c0146d4d01539b6d48212f4a as mentioned in #966.
in `badger/badger` dir:
```
$ git checkout v1.5.5
$ go build -o /tmp/badger-v1.5.5 .
$ git checkout v1.6.0
$ go build -o /tmp/badger-v1.6.0 .
$ git checkout v2.0.0-rc3
$ go build -o /tmp/badger-v2.0.0 .
```
Data is in `/tmp/old/`
```
$ /tmp/badger-v1.5.5 -dir /tmp/old/ -f /tmp/backup
Listening for /debug HTTP requests at port: 8080
Port busy. Trying another one...
Listening for /debug HTTP requests at port: 8081
2019/10/04 02:19:32 Replaying from value pointer: {Fid:0 Len:44 Offset:221201341}
2019/10/04 02:19:32 Iterating file id: 0
2019/10/04 02:19:32 Iteration took: 11.493µs
$ ls -lh /tmp/backup
-rw-r--r-- 1 jmansfield 501 206M Oct 4 02:19 /tmp/backup
$ mkdir /tmp/new/
$ /tmp/badger-v2.0.0 restore --dir /tmp/new/ -f /tmp/backup
Listening for /debug HTTP requests at port: 8080
Port busy. Trying another one...
Listening for /debug HTTP requests at port: 8081
badger 2019/10/04 02:21:49 INFO: All 0 tables opened in 0s
badger 2019/10/04 02:21:49 INFO: Got compaction priority: {level:0 score:1.73 dropPrefix:[]}
Error: proto: illegal wireType 7
Usage:
badger restore [flags]
Flags:
-f, --backup-file string File to restore from (default ""badger.bak"")
-h, --help help for restore
-w, --max-pending-writes int Max number of pending writes at any time while restore (default 256)
Global Flags:
--dir string Directory where the LSM tree files are located. (required)
--vlog-dir string Directory where the value log files are located, if different from --dir
proto: illegal wireType 7
$ rm /tmp/backup
$ rm /tmp/new/*
$ /tmp/badger-v1.6.0 backup --dir /tmp/old/ -f /tmp/backup
Listening for /debug HTTP requests at port: 8080
Port busy. Trying another one...
Listening for /debug HTTP requests at port: 8081
badger 2019/10/04 02:23:41 INFO: All 1 tables opened in 116ms
badger 2019/10/04 02:23:41 INFO: Replaying file id: 0 at offset: 221201385
badger 2019/10/04 02:23:41 INFO: Replay took: 13.803µs
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 3.748252ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 2.725004ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 4.651587ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.3 MB in 10.003926ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.3 MB in 10.617858ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.3 MB in 9.836249ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.3 MB in 7.726215ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.3 MB in 8.288252ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.3 MB in 8.235459ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.3 MB in 6.29205ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 3.652964ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 3.849211ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 4.060754ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 4.05686ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 4.305175ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 45.192963ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 25 MB in 15.394368ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 21 MB in 12.44265ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 17 MB in 12.794977ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 13 MB in 47.555567ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 38 MB in 22.24327ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 3.004712ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 3.239008ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 2.023829ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 1.900745ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 43.769075ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 13 MB in 9.170276ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Sent 427900 keys
badger 2019/10/04 02:23:42 INFO: Got compaction priority: {level:0 score:1.73 dropPrefix:[]}
$ ls -lh /tmp/backup
-rw-r--r-- 1 jmansfield 501 206M Oct 4 02:23 /tmp/backup
$ /tmp/badger-v2.0.0 restore --dir /tmp/new/ -f /tmp/backup
Listening for /debug HTTP requests at port: 8080
Port busy. Trying another one...
Listening for /debug HTTP requests at port: 8081
badger 2019/10/04 02:24:16 INFO: All 0 tables opened in 0s
badger 2019/10/04 02:24:18 DEBUG: Storing value log head: {Fid:0 Len:1918 Offset:215714908}
badger 2019/10/04 02:24:18 INFO: Got compaction priority: {level:0 score:1.73 dropPrefix:[]}
badger 2019/10/04 02:24:18 INFO: Running for level: 0
badger 2019/10/04 02:24:18 DEBUG: LOG Compact. Added 427901 keys. Skipped 0 keys. Iteration took: 227.260848ms
badger 2019/10/04 02:24:18 DEBUG: Discard stats: map[]
badger 2019/10/04 02:24:18 INFO: LOG Compact 0->1, del 1 tables, add 1 tables, took 350.923977ms
badger 2019/10/04 02:24:18 INFO: Compaction for level: 0 DONE
badger 2019/10/04 02:24:18 INFO: Force compaction on level 0 done
$ ls -lh /tmp/new/
total 232M
-rw-r--r-- 1 jmansfield 501 206M Oct 4 02:24 000000.vlog
-rw-r--r-- 1 jmansfield 501 26M Oct 4 02:24 000002.sst
-rw-r--r-- 1 jmansfield 501 48 Oct 4 02:24 MANIFEST
```
### What did you expect to see?
The data get converted using 1.5.5 to v2.0.0
### What did you see instead?
An error about wireType unless the `backup` operation was performed with v1.6.0
I consider this to be an issue with documentation, not the tool. The conversion instructions in the documentation specify v1.5.5. That either doesn't work or doesn't _always_ work.",1.0,"Backup/Restore v1.5.5 -> v2.0.0 fails with - ### What version of Go are you using (`go version`)?
$ go version
go version go1.13 linux/amd64
### What version of Badger are you using?
v1.5.5, v1.6.0, v2.0.0
### Does this issue reproduce with the latest master?
Yes (ish)
### What are the hardware specifications of the machine (RAM, OS, Disk)?
x86-64
### What did you do?
Trying to use the instructions [here](https://github.com/dgraph-io/badger/blob/master/README.md#i-see-manifest-has-unsupported-version-x-we-support-y-error) to convert a pre v2 database to v2. I get an error about the wiretype. I suspect this may be related to 4e5cbcc8574b8258c0146d4d01539b6d48212f4a as mentioned in #966.
in `badger/badger` dir:
```
$ git checkout v1.5.5
$ go build -o /tmp/badger-v1.5.5 .
$ git checkout v1.6.0
$ go build -o /tmp/badger-v1.6.0 .
$ git checkout v2.0.0-rc3
$ go build -o /tmp/badger-v2.0.0 .
```
Data is in `/tmp/old/`
```
$ /tmp/badger-v1.5.5 -dir /tmp/old/ -f /tmp/backup
Listening for /debug HTTP requests at port: 8080
Port busy. Trying another one...
Listening for /debug HTTP requests at port: 8081
2019/10/04 02:19:32 Replaying from value pointer: {Fid:0 Len:44 Offset:221201341}
2019/10/04 02:19:32 Iterating file id: 0
2019/10/04 02:19:32 Iteration took: 11.493µs
$ ls -lh /tmp/backup
-rw-r--r-- 1 jmansfield 501 206M Oct 4 02:19 /tmp/backup
$ mkdir /tmp/new/
$ /tmp/badger-v2.0.0 restore --dir /tmp/new/ -f /tmp/backup
Listening for /debug HTTP requests at port: 8080
Port busy. Trying another one...
Listening for /debug HTTP requests at port: 8081
badger 2019/10/04 02:21:49 INFO: All 0 tables opened in 0s
badger 2019/10/04 02:21:49 INFO: Got compaction priority: {level:0 score:1.73 dropPrefix:[]}
Error: proto: illegal wireType 7
Usage:
badger restore [flags]
Flags:
-f, --backup-file string File to restore from (default ""badger.bak"")
-h, --help help for restore
-w, --max-pending-writes int Max number of pending writes at any time while restore (default 256)
Global Flags:
--dir string Directory where the LSM tree files are located. (required)
--vlog-dir string Directory where the value log files are located, if different from --dir
proto: illegal wireType 7
$ rm /tmp/backup
$ rm /tmp/new/*
$ /tmp/badger-v1.6.0 backup --dir /tmp/old/ -f /tmp/backup
Listening for /debug HTTP requests at port: 8080
Port busy. Trying another one...
Listening for /debug HTTP requests at port: 8081
badger 2019/10/04 02:23:41 INFO: All 1 tables opened in 116ms
badger 2019/10/04 02:23:41 INFO: Replaying file id: 0 at offset: 221201385
badger 2019/10/04 02:23:41 INFO: Replay took: 13.803µs
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 3.748252ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 2.725004ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 4.651587ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.3 MB in 10.003926ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.3 MB in 10.617858ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.3 MB in 9.836249ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.3 MB in 7.726215ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.3 MB in 8.288252ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.3 MB in 8.235459ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.3 MB in 6.29205ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 3.652964ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 3.849211ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 4.060754ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 4.05686ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 4.305175ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 45.192963ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 25 MB in 15.394368ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 21 MB in 12.44265ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 17 MB in 12.794977ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 13 MB in 47.555567ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 38 MB in 22.24327ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 3.004712ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 3.239008ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 2.023829ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 1.900745ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 4.2 MB in 43.769075ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Created batch of size: 13 MB in 9.170276ms.
badger 2019/10/04 02:23:41 INFO: DB.Backup Sent 427900 keys
badger 2019/10/04 02:23:42 INFO: Got compaction priority: {level:0 score:1.73 dropPrefix:[]}
$ ls -lh /tmp/backup
-rw-r--r-- 1 jmansfield 501 206M Oct 4 02:23 /tmp/backup
$ /tmp/badger-v2.0.0 restore --dir /tmp/new/ -f /tmp/backup
Listening for /debug HTTP requests at port: 8080
Port busy. Trying another one...
Listening for /debug HTTP requests at port: 8081
badger 2019/10/04 02:24:16 INFO: All 0 tables opened in 0s
badger 2019/10/04 02:24:18 DEBUG: Storing value log head: {Fid:0 Len:1918 Offset:215714908}
badger 2019/10/04 02:24:18 INFO: Got compaction priority: {level:0 score:1.73 dropPrefix:[]}
badger 2019/10/04 02:24:18 INFO: Running for level: 0
badger 2019/10/04 02:24:18 DEBUG: LOG Compact. Added 427901 keys. Skipped 0 keys. Iteration took: 227.260848ms
badger 2019/10/04 02:24:18 DEBUG: Discard stats: map[]
badger 2019/10/04 02:24:18 INFO: LOG Compact 0->1, del 1 tables, add 1 tables, took 350.923977ms
badger 2019/10/04 02:24:18 INFO: Compaction for level: 0 DONE
badger 2019/10/04 02:24:18 INFO: Force compaction on level 0 done
$ ls -lh /tmp/new/
total 232M
-rw-r--r-- 1 jmansfield 501 206M Oct 4 02:24 000000.vlog
-rw-r--r-- 1 jmansfield 501 26M Oct 4 02:24 000002.sst
-rw-r--r-- 1 jmansfield 501 48 Oct 4 02:24 MANIFEST
```
### What did you expect to see?
The data get converted using 1.5.5 to v2.0.0
### What did you see instead?
An error about wireType unless the `backup` operation was performed with v1.6.0
I consider this to be an issue with documentation, not the tool. The conversion instructions in the documentation specify v1.5.5. That other doesn't work or doesn't _always_ work.",0,backup restore fails with what version of go are you using go version go version go version linux what version of badger are you using does this issue reproduce with the latest master yes ish what are the hardware specifications of the machine ram os disk what did you do trying to use the instructions to convert a pre database to i get an error about the wiretype i suspect this may be related to as mentioned in in badger badger dir git checkout go build o tmp badger git checkout go build o tmp badger git checkout go build o tmp badger data is in tmp old tmp badger dir tmp old f tmp backup listening for debug http requests at port port busy trying another one listening for debug http requests at port replaying from value pointer fid len offset iterating file id iteration took ls lh tmp backup rw r r jmansfield oct tmp backup mkdir tmp new tmp badger restore dir tmp new f tmp backup listening for debug http requests at port port busy trying another one listening for debug http requests at port badger info all tables opened in badger info got compaction priority level score dropprefix error proto illegal wiretype usage badger restore flags f backup file string file to restore from default badger bak h help help for restore w max pending writes int max number of pending writes at any time while restore default global flags dir string directory where the lsm tree files are located required vlog dir string directory where the value log files are located if different from dir proto illegal wiretype rm tmp backup rm tmp new tmp badger backup dir tmp old f tmp backup listening for debug http requests at port port busy trying another one listening for debug http requests at port badger info all tables opened in badger info replaying file id at offset badger info replay took badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup created batch of size mb in badger info db backup sent keys badger info got compaction priority level score dropprefix ls lh tmp backup rw r r jmansfield oct tmp backup tmp badger restore dir tmp new f tmp backup listening for debug http requests at port 
port busy trying another one listening for debug http requests at port badger info all tables opened in badger debug storing value log head fid len offset badger info got compaction priority level score dropprefix badger info running for level badger debug log compact added keys skipped keys iteration took badger debug discard stats map badger info log compact del tables add tables took badger info compaction for level done badger info force compaction on level done ls lh tmp new total rw r r jmansfield oct vlog rw r r jmansfield oct sst rw r r jmansfield oct manifest what did you expect to see the data get converted using to what did you see instead an error about wiretype unless the backup operation was performed with i consider this to be an issue with documentation not the tool the conversion instructions in the documentation specify that other doesn t work or doesn t always work ,0
1766,26028786891.0,IssuesEvent,2022-12-21 18:53:44,AzureAD/microsoft-authentication-library-for-dotnet,https://api.github.com/repos/AzureAD/microsoft-authentication-library-for-dotnet,closed,[Feature Request] Improve logging,enhancement Feature Request Supportability needs-spec,"Logging at `LogLevel.Info` is too verbose and the logs are unstructured. A simple request to get a token using a client secret logs 18 events. In a service that requests many tokens, writing 18 events per token request adds a lot of noise to the service logs.
Logs are sent to the `LogCallback` delegate as an unstructured string. In a service that has structured logs, passing the date, time, OS version and library version as a single string on each request adds a lot of duplication and makes the logs hard to integrate with other service logging.
**Describe the solution you'd like**
- Change `LogLevel.Info` to log just a summary of a token request, for example one event when the request is started containing the request details and a second event when the request succeeds / fails containing the response details and the caching behavior
- Move the logs for each step of the token acquisition process to `LogLevel.Verbose`
- Pass the log details to `LogCallback` as a structure rather than a string, separating out the various parts into fields
- Give event names or event IDs to the various events to allow services to filter events
**Describe alternatives you've considered**
- Parsing the log string with a regular expression to extract the useful data
- Only writing logs at `LogLevel.Warning` and `LogLevel.Error` to service logs and logging requests by injecting a handler into the HTTP pipeline",True,"[Feature Request] Improve logging - Logging at `LogLevel.Info` is too verbose and the logs are unstructured. A simple request to get a token using a client secret logs 18 events. In a service that requests many tokens, writing 18 events per token request adds a lot of noise to the service logs.
Logs are sent to the `LogCallback` delegate as an unstructured string. In a service that has structured logs, passing the date, time, OS version and library version as a single string on each request adds a lot of duplication and makes the logs hard to integrate with other service logging.
**Describe the solution you'd like**
- Change `LogLevel.Info` to log just a summary of a token request, for example one event when the request is started containing the request details and a second event when the request succeeds / fails containing the response details and the caching behavior
- Move the logs for each step of the token acquisition process to `LogLevel.Verbose`
- Pass the log details to `LogCallback` as a structure rather than a string, separating out the various parts into fields
- Give event names or event IDs to the various events to allow services to filter events
**Describe alternatives you've considered**
- Parsing the log string with a regular expression to extract the useful data
- Only writing logs at `LogLevel.Warning` and `LogLevel.Error` to service logs and logging requests by injecting a handler into the HTTP pipeline",1, improve logging logging at loglevel info is too verbose and the logs are unstructured a simple request to get a token using a client secret logs events in a service that requests many tokens writing events per token request adds a lot of noise to the service logs logs are sent to the logcallback delegate as an unstructured string in a service that has structured logs passing the date time os version and library version as a single string on each request adds a lot of duplication and makes the logs hard to integrate with other service logging describe the solution you d like change loglevel info to log just a summary of a token request for example one event when the request is started containing the request details and a second event when the request succeeds fails containing the response details and the caching behavior move the logs for each step of the token acquisition process to loglevel verbose pass the log details to logcallback as a structure rather than a string separating out the various parts into fields give event names or event ids to the various events to allow services to filter events describe alternatives you ve considered parsing the log string with a regular expression to extract the useful data only writing logs at loglevel warning and loglevel error to service logs and logging requests by injecting a handler into the http pipeline,1
312655,9551019838.0,IssuesEvent,2019-05-02 13:35:05,bbc/simorgh,https://api.github.com/repos/bbc/simorgh,opened,Vulnerability in execa dependency,bug high priority,"**Describe the bug**
```
✗ Medium severity vulnerability found in execa
Description: Arbitrary Command Injection
Info: https://snyk.io/vuln/SNYK-JS-EXECA-174564
Introduced through: webpack-cli@3.3.0
From: webpack-cli@3.3.0 > yargs@12.0.5 > os-locale@3.1.0 > execa@1.0.0
Organisation: bbc
Package manager: npm
Target file: package-lock.json
Open source: no
Project path: simorgh
Tested 738 dependencies for known vulnerabilities, found 1 vulnerability, 1 vulnerable path.
Run `snyk wizard` to address these issues.
```
**To Reproduce**
Steps to reproduce the behavior:
1. locally run `npm run snyk`
2. See error
**Expected behavior**
No error
**Screenshots**
",1.0,"Vulnerability in execa dependency - **Describe the bug**
```
✗ Medium severity vulnerability found in execa
Description: Arbitrary Command Injection
Info: https://snyk.io/vuln/SNYK-JS-EXECA-174564
Introduced through: webpack-cli@3.3.0
From: webpack-cli@3.3.0 > yargs@12.0.5 > os-locale@3.1.0 > execa@1.0.0
Organisation: bbc
Package manager: npm
Target file: package-lock.json
Open source: no
Project path: simorgh
Tested 738 dependencies for known vulnerabilities, found 1 vulnerability, 1 vulnerable path.
Run `snyk wizard` to address these issues.
```
**To Reproduce**
Steps to reproduce the behavior:
1. locally run `npm run snyk`
2. See error
**Expected behavior**
No error
**Screenshots**
",0,vulnerability in execa dependency describe the bug ✗ medium severity vulnerability found in execa description arbitrary command injection info introduced through webpack cli from webpack cli yargs os locale execa organisation bbc package manager npm target file package lock json open source no project path simorgh tested dependencies for known vulnerabilities found vulnerability vulnerable path run snyk wizard to address these issues to reproduce steps to reproduce the behavior locally run npm run snyk see error expected behavior no error screenshots img width alt screen shot at src ,0
64485,8737558615.0,IssuesEvent,2018-12-11 23:00:35,awslabs/aws-sam-cli,https://api.github.com/repos/awslabs/aws-sam-cli,closed,Examples link in Readme broken,area/docs stage/in-review type/documentation,"In the [Examples Section](https://github.com/awslabs/aws-sam-cli#examples) of the Readme, there's a link to a `samples` folder, which is broken.
Where should we send users to instead?",1.0,"Examples link in Readme broken - In the [Examples Section](https://github.com/awslabs/aws-sam-cli#examples) of the Readme, there's a link to a `samples` folder, which is broken.
Where should we send users to instead?",0,examples link in readme broken in the of the readme there s a link to a samples folder which is broken where should we send users to instead ,0
161141,12531762274.0,IssuesEvent,2020-06-04 14:58:39,elastic/elasticsearch,https://api.github.com/repos/elastic/elasticsearch,closed,:x-pack:qa:rolling-upgrade:v6.8.10#upgradedClusterTest: IO error while waiting cluster,:Distributed/Network >test-failure Team:Distributed,"**Build scan**: https://gradle-enterprise.elastic.co/s/yvd4eyecgtota
**Repro line**: `./gradlew -p x-pack/qa/rolling-upgrade check`
**Reproduces locally?**: yes. wait. no. I'm confused.
**Applicable branches**: 7.x
**Failure history**:
I don't see it in build-stats but I'm not good at searching. I've seen it several times today.
**Failure excerpt**:
```
» Caused by: org.elasticsearch.transport.ActionNotFoundTransportException: No handler for action [indices:admin/data_stream/delete]
```
...
```
» Caused by: org.elasticsearch.ElasticsearchException: node doesn't have meta data for index [my_old_index/Vn7jbwk0TF6Bh9MubpGT0Q]
» at org.elasticsearch.indices.store.TransportNodesListShardStoreMetadata.listStoreMetadata(TransportNodesListShardStoreMetadata.java:160) ~[?:?]
```
",1.0,":x-pack:qa:rolling-upgrade:v6.8.10#upgradedClusterTest: IO error while waiting cluster - **Build scan**: https://gradle-enterprise.elastic.co/s/yvd4eyecgtota
**Repro line**: `./gradlew -p x-pack/qa/rolling-upgrade check`
**Reproduces locally?**: yes. wait. no. I'm confused.
**Applicable branches**: 7.x
**Failure history**:
I don't see it in build-stats but I'm not good at searching. I've seen it several times today.
**Failure excerpt**:
```
» Caused by: org.elasticsearch.transport.ActionNotFoundTransportException: No handler for action [indices:admin/data_stream/delete]
```
...
```
» Caused by: org.elasticsearch.ElasticsearchException: node doesn't have meta data for index [my_old_index/Vn7jbwk0TF6Bh9MubpGT0Q]
» at org.elasticsearch.indices.store.TransportNodesListShardStoreMetadata.listStoreMetadata(TransportNodesListShardStoreMetadata.java:160) ~[?:?]
```
",0, x pack qa rolling upgrade upgradedclustertest io error while waiting cluster build scan repro line gradlew p x pack qa rolling upgrade check reproduces locally yes wait no i m confused applicable branches x failure history i don t see it in build stats but i m not good at searching i ve seen it several times today failure excerpt » caused by org elasticsearch transport actionnotfoundtransportexception no handler for action » caused by org elasticsearch elasticsearchexception node doesn t have meta data for index » at org elasticsearch indices store transportnodeslistshardstoremetadata liststoremetadata transportnodeslistshardstoremetadata java ,0
536,7542437776.0,IssuesEvent,2018-04-17 13:01:04,orientechnologies/orientdb,https://api.github.com/repos/orientechnologies/orientdb,closed,SEVER Internal server error,question supportability,"## OrientDB Version, operating system, or hardware.
- v2.2.5
## Operating System
- [x] Linux
- [ ] MacOSX
- [ ] Windows
- [ ] Other Unix
- [ ] Other, name?
## Expected behavior and actual behavior
Hi,
This is the only log that we got from OrientDB when we tried to query the database. No other information is available.
How can we gain more insight into the ""internal server error""?
I would expect a lengthy stacktrace or some other pointers, but nothing is to be seen in the log.
## Steps to reproduce the problem
",True,"SEVER Internal server error - ## OrientDB Version, operating system, or hardware.
- v2.2.5
## Operating System
- [x] Linux
- [ ] MacOSX
- [ ] Windows
- [ ] Other Unix
- [ ] Other, name?
## Expected behavior and actual behavior
Hi,
This is the only log that we got from OrientDB when we tried to query the database. No other information is available.
How can we gain more insight into the ""internal server error""?
I would expect a lengthy stacktrace or some other pointers, but nothing is to be seen in the log.
## Steps to reproduce the problem
",1,sever internal server error orientdb version operating system or hardware operating system linux macosx windows other unix other name expected behavior and actual behavior hi this is the only log what we got from orientdb when tried to query the database no other information is available how to gain more insight of the internal server error i would expect a lenghty stacktrace or some other pointers but nothing is to be seen in the log steps to reproduce the problem ,1
118,3357814943.0,IssuesEvent,2015-11-19 04:38:55,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,Automatic referencing of mscorlib on coreclr,Area-Compilers core-clr Feature Request Portability,"Unless the [/nostdlib](https://msdn.microsoft.com/en-us/library/fa13yay7.aspx) option is passed the C# compiler will automatically add a reference to mscorlib for the compilation. The compiler assumes that mscorlib exist inside [GetCORSystemDirectory](https://msdn.microsoft.com/en-us/library/k0588yw5%28v=vs.110%29.aspx) / [RuntimeEnvironment::GetRuntimeDirectory](https://msdn.microsoft.com/en-us/library/system.runtime.interopservices.runtimeenvironment.getruntimedirectory%28v=vs.90%29.aspx). No checking is performed on this, it simply assumes the file exists in that location.
This process doesn't make sense when the compiler is run under coreclr for a couple of reasons:
- There is no mscorlib in coreclr scenarios. Instead there are a set of contract assemblies to be referenced.
- There is no SDK directory to search for. The CoreCLR deployments for the compiler will include the runtime assemblies, not the contract assemblies.
Note that relying on the desktop APIs even when running under coreclr is not really an option. They won't exist on xcopy installs or cross platform.
We need to come up with a suitable cross plat / coreclr strategy for this scenario.
",True,"Automatic referencing of mscorlib on coreclr - Unless the [/nostdlib](https://msdn.microsoft.com/en-us/library/fa13yay7.aspx) option is passed the C# compiler will automatically add a reference to mscorlib for the compilation. The compiler assumes that mscorlib exist inside [GetCORSystemDirectory](https://msdn.microsoft.com/en-us/library/k0588yw5%28v=vs.110%29.aspx) / [RuntimeEnvironment::GetRuntimeDirectory](https://msdn.microsoft.com/en-us/library/system.runtime.interopservices.runtimeenvironment.getruntimedirectory%28v=vs.90%29.aspx). No checking is performed on this, it simply assumes the file exists in that location.
This process doesn't make sense when the compiler is run under coreclr for a couple of reasons:
- There is no mscorlib in coreclr scenarios. Instead there are a set of contract assemblies to be referenced.
- There is no SDK directory to search for. The CoreCLR deployments for the compiler will include the runtime assemblies, not the contract assemblies.
Note that relying on the desktop APIs even when running under coreclr is not really an option. They won't exist on xcopy installs or cross platform.
We need to come up with a suitable cross plat / coreclr strategy for this scenario.
",1,automatic referencing of mscorlib on coreclr unless the option is passed the c compiler will automatically add a reference to mscorlib for the compilation the compiler assumes that mscorlib exist inside no checking is performed on this it simply assumes the file exists in that location this process doesn t make sense when the compiler is run under coreclr for a couple of reasons there is no mscorlib in coreclr scenarios instead there are a set of contract assemblies to be referenced there is no sdk directory to search for the coreclr deployments for the compiler will include the runtime assemblies not the contract assemblies note that relying to the desktop apis even when running under coreclr is not really an option they won t exist on xcopy installs or cross platform we need to come up with a suitable cross plat coreclr strategy for this scenario ,1
1975,30881299253.0,IssuesEvent,2023-08-03 17:49:10,tree-sitter/tree-sitter,https://api.github.com/repos/tree-sitter/tree-sitter,closed,Cannot compile tree-sitter with Clang and MinGW on Windows,bug portability,"## Issues
There are multiple issues related to the incompatibility of Tree-sitter with Clang and MinGW on Windows:
- [x] When running `tree-sitter test`, the code assumes that MSVC is used on Windows, which is not a correct assumption. Flags not supported by other compilers like Clang and MinGW are used for compilation (e.g. `-fPIC`). Fixed by pull request #1835
- [x] `fdopen`, which is a POSIX function not part of standard C, is used on Windows. This causes compilation errors when `-Werror` is enabled with Clang on Windows because this warning is enabled by default. Fixed by pull request #1411
```cpp
warning: In file included from src\lib.c:12:
warning: src/./parser.c:1781:28: warning: 'fdopen' is deprecated: The POSIX name for this item is deprecated. Instead, use the ISO C and C++ conformant name: _fdopen. See online help for details. [-Wdeprecated-declarations]
warning: self->dot_graph_file = fdopen(fd, ""a"");
warning: ^
warning: C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\ucrt\stdio.h:2431:28: note: 'fdopen' has been explicitly marked deprecated here
warning: _Check_return_ _CRT_NONSTDC_DEPRECATE(_fdopen) _ACRTIMP FILE* __cdecl fdopen(_In_ int _FileHandle, _In_z_ char const* _Format);
warning: ^
warning: C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\ucrt\corecrt.h:414:50: note: expanded from macro '_CRT_NONSTDC_DEPRECATE'
warning: #define _CRT_NONSTDC_DEPRECATE(_NewName) _CRT_DEPRECATE_TEXT( \
warning: ^
warning: C:\Program Files (x86)\Microsoft Visual Studio\2019\Preview\VC\Tools\MSVC\14.29.30133\include\vcruntime.h:310:47: note: expanded from macro '_CRT_DEPRECATE_TEXT'
warning: #define _CRT_DEPRECATE_TEXT(_Text) __declspec(deprecated(_Text))
```
- [x] 'ts_stack__add_slice' is used in an inline function with external linkage. This is technically an error because an inline function with external linkage may not reference a function with internal (static) linkage. This causes compilation errors when `-Werror` is enabled with Clang on Windows because this warning is enabled by default. Fixed by pull request #1411
```cpp
warning: In file included from src\lib.c:14:
warning: src/./stack.c:311:9: warning: static function 'ts_stack__add_slice' is used in an inline function with external linkage [-Wstatic-in-inline]
warning: ts_stack__add_slice(
warning: ^
warning: src/./stack.c:274:1: note: use 'static' to give inline function 'stack__iter' internal linkage
warning: inline StackSliceArray stack__iter(Stack *self, StackVersion version,
warning: ^
warning: static
warning: src/./stack.c:15:16: note: expanded from macro 'inline'
warning: #define inline __forceinline
warning: ^
warning: src/./stack.c:258:13: note: 'ts_stack__add_slice' declared here
warning: static void ts_stack__add_slice(Stack *self, StackVersion original_version,
warning: ^
warning: 1 warning generated.
```
",True,"Cannot compile tree-sitter with Clang and MinGW on Windows - ## Issues
There are multiple issues related to the incompatibility of Tree-sitter with Clang and MinGW on Windows:
- [x] When running `tree-sitter test`, the code assumes that MSVC is used on Windows, which is not a correct assumption. Flags not supported by other compilers like Clang and MinGW are used for compilation (e.g. `-fPIC`). Fixed by pull request #1835
- [x] `fdopen`, which is a POSIX function not part of standard C, is used on Windows. This causes compilation errors when `-Werror` is enabled with Clang on Windows because this warning is enabled by default. Fixed by pull request #1411
```cpp
warning: In file included from src\lib.c:12:
warning: src/./parser.c:1781:28: warning: 'fdopen' is deprecated: The POSIX name for this item is deprecated. Instead, use the ISO C and C++ conformant name: _fdopen. See online help for details. [-Wdeprecated-declarations]
warning: self->dot_graph_file = fdopen(fd, ""a"");
warning: ^
warning: C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\ucrt\stdio.h:2431:28: note: 'fdopen' has been explicitly marked deprecated here
warning: _Check_return_ _CRT_NONSTDC_DEPRECATE(_fdopen) _ACRTIMP FILE* __cdecl fdopen(_In_ int _FileHandle, _In_z_ char const* _Format);
warning: ^
warning: C:\Program Files (x86)\Windows Kits\10\Include\10.0.19041.0\ucrt\corecrt.h:414:50: note: expanded from macro '_CRT_NONSTDC_DEPRECATE'
warning: #define _CRT_NONSTDC_DEPRECATE(_NewName) _CRT_DEPRECATE_TEXT( \
warning: ^
warning: C:\Program Files (x86)\Microsoft Visual Studio\2019\Preview\VC\Tools\MSVC\14.29.30133\include\vcruntime.h:310:47: note: expanded from macro '_CRT_DEPRECATE_TEXT'
warning: #define _CRT_DEPRECATE_TEXT(_Text) __declspec(deprecated(_Text))
```
- [x] 'ts_stack__add_slice' is used in an inline function with external linkage. This is technically an error because an inline function with external linkage may not reference a function with internal (static) linkage. This causes compilation errors when `-Werror` is enabled with Clang on Windows because this warning is enabled by default. Fixed by pull request #1411
```cpp
warning: In file included from src\lib.c:14:
warning: src/./stack.c:311:9: warning: static function 'ts_stack__add_slice' is used in an inline function with external linkage [-Wstatic-in-inline]
warning: ts_stack__add_slice(
warning: ^
warning: src/./stack.c:274:1: note: use 'static' to give inline function 'stack__iter' internal linkage
warning: inline StackSliceArray stack__iter(Stack *self, StackVersion version,
warning: ^
warning: static
warning: src/./stack.c:15:16: note: expanded from macro 'inline'
warning: #define inline __forceinline
warning: ^
warning: src/./stack.c:258:13: note: 'ts_stack__add_slice' declared here
warning: static void ts_stack__add_slice(Stack *self, StackVersion original_version,
warning: ^
warning: 1 warning generated.
```
",1,cannot compile tree sitter with clang and mingw on windows issues there are multiple issues related to the incompatibility of tree sitter with clang and mingw on windows when running tree sitter test the code assumes that msvc is used on windows and flags which is not a correct assumption flags not supported by other compilers like clang and mingw are used for compilation e g fpic fixed by pull request fdopen which is a non standard posix function is used on windows this causes compilation errors when werror is enabled with clang on windows because this warning is enabled by default fixed by pull request cpp warning in file included from src lib c warning src parser c warning fdopen is deprecated the posix name for this item is deprecated instead use the iso c and c conformant name fdopen see online help for details warning self dot graph file fdopen fd a warning warning c program files windows kits include ucrt stdio h note fdopen has been explicitly marked deprecated here warning check return crt nonstdc deprecate fdopen acrtimp file cdecl fdopen in int filehandle in z char const format warning warning c program files windows kits include ucrt corecrt h note expanded from macro crt nonstdc deprecate warning define crt nonstdc deprecate newname crt deprecate text warning warning c program files microsoft visual studio preview vc tools msvc include vcruntime h note expanded from macro crt deprecate text warning define crt deprecate text text declspec deprecated text ts stack add slice is used in an inline function with external linkage this is technically an error because the inline functions should not be linked statically this causes compilation errors when werror is enabled with clang on windows because this warning is enabled by default fixed by pull request cpp warning in file included from src lib c warning src stack c warning static function ts stack add slice is used in an inline function with external linkage warning ts stack add slice warning warning src stack c note use static to give inline function stack iter internal linkage warning inline stackslicearray stack iter stack self stackversion version warning warning static warning src stack c note expanded from macro inline warning define inline forceinline warning warning src stack c note ts stack add slice declared here warning static void ts stack add slice stack self stackversion original version warning warning warning generated ,1
252848,21633906725.0,IssuesEvent,2022-05-05 12:36:54,gravitee-io/issues,https://api.github.com/repos/gravitee-io/issues,closed,Error when importing an API without logging on an env with `Logging audit events` activated,type: bug project: APIM Support 2 p2 loop quantum status: in test,"## :collision: Describe the bug
When trying to import an API without the `logging` part on an env with the `Generate API Logging audit events (API_LOGGING_ENABLED, API_LOGGING_DISABLED, API_LOGGING_UPDATED)` option activated (available under Env Settings > Gateway - API Logging menu), it throws the following error:
```java
11:48:12.387 \[gravitee-listener-671\] ERROR i.g.r.a.service.impl.ApiServiceImpl - An error occurs while auditing API logging configuration for API: 8e6c1778-9a1e-41b1-ac17-789a1e41b101
11:48:12.387 \[gravitee-listener-671\] ERROR i.g.r.a.service.impl.ApiServiceImpl - An error occurs while auditing API logging configuration for API: 8e6c1778-9a1e-41b1-ac17-789a1e41b101
java.lang.NullPointerException: null
at io.gravitee.rest.api.service.impl.ApiServiceImpl.auditApiLogging(ApiServiceImpl.java:3400)
at io.gravitee.rest.api.service.impl.ApiServiceImpl.update(ApiServiceImpl.java:1649)
at jdk.internal.reflect.GeneratedMethodAccessor1997.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
```
## :sunrise_over_mountains: To Reproduce
Steps to reproduce the behaviour:
1. Create an API, activate the logging and export it
2. Go to Env Settings > Gateway - API Logging menu and activate `Generate API Logging audit events`
3. Update the exported API to remove the `logging` section
4. Go back on the API (in the console) and try to import the updated file
4. See error
## :rainbow: Expected behaviour
Everything should be fine, the API should be updated and logging deactivated for it.
## Current behavior
Import is failing.
## :movie_camera: Useful information
Do not forget to activate
```
Generate API Logging audit events (API_LOGGING_ENABLED, API_LOGGING_DISABLED, API_LOGGING_UPDATED)
```

This NPE was spotted by SonarCloud:

## :computer: Desktop:
NA
## :warning: Potential impacts
***Which other features may be impacted by this fix. This could be populated after fix***
***What are the impacted versions?***
3.10.x +
## :link: Dependencies
Link a story or other related things...
",1.0,"Error when importing an API without logging on an env with `Logging audit events` activated - ## :collision: Describe the bug
When trying to import an API without the `logging` part on an env with the `Generate API Logging audit events (API_LOGGING_ENABLED, API_LOGGING_DISABLED, API_LOGGING_UPDATED)` option activated (available under Env Settings > Gateway - API Logging menu), it throws the following error:
```java
11:48:12.387 \[gravitee-listener-671\] ERROR i.g.r.a.service.impl.ApiServiceImpl - An error occurs while auditing API logging configuration for API: 8e6c1778-9a1e-41b1-ac17-789a1e41b101
11:48:12.387 \[gravitee-listener-671\] ERROR i.g.r.a.service.impl.ApiServiceImpl - An error occurs while auditing API logging configuration for API: 8e6c1778-9a1e-41b1-ac17-789a1e41b101
java.lang.NullPointerException: null
at io.gravitee.rest.api.service.impl.ApiServiceImpl.auditApiLogging(ApiServiceImpl.java:3400)
at io.gravitee.rest.api.service.impl.ApiServiceImpl.update(ApiServiceImpl.java:1649)
at jdk.internal.reflect.GeneratedMethodAccessor1997.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
```
## :sunrise_over_mountains: To Reproduce
Steps to reproduce the behaviour:
1. Create an API, activate the logging and export it
2. Go to Env Settings > Gateway - API Logging menu and activate `Generate API Logging audit events`
3. Update the exported API to remove the `logging` section
4. Go back on the API (in the console) and try to import the updated file
4. See error
## :rainbow: Expected behaviour
Everything should be fine, the API should be updated and logging deactivated for it.
## Current behavior
Import is failing.
## :movie_camera: Useful information
Do not forget to activate
```
Generate API Logging audit events (API_LOGGING_ENABLED, API_LOGGING_DISABLED, API_LOGGING_UPDATED)
```

This NPE was spotted by SonarCloud:

## :computer: Desktop:
NA
## :warning: Potential impacts
***Which other features may be impacted by this fix. This could be populated after fix***
***What are the impacted versions?***
3.10.x +
## :link: Dependencies
Link a story or other related things...
",0,error when importing an api without logging on an env with logging audit events activated collision describe the bug when trying to import an api without logging part on an env with generate api logging audit events api logging enabled api logging disabled api logging updated option activated available under env settings gateway api logging menu it throws the following error java error i g r a service impl apiserviceimpl an error occurs while auditing api logging configuration for api error i g r a service impl apiserviceimpl an error occurs while auditing api logging configuration for api java lang nullpointerexception null at io gravitee rest api service impl apiserviceimpl auditapilogging apiserviceimpl java at io gravitee rest api service impl apiserviceimpl update apiserviceimpl java at jdk internal reflect invoke unknown source at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java sunrise over mountains to reproduce steps to reproduce the behaviour create an api activate the logging and export it go to env settings gateway api logging menu and activate generate api logging audit events update the exported api to remove the logging section go back on the api in the console and try to import the updated file see error rainbow expected behaviour everything should be fine the api should be updated and logging deactivated for it current behavior import is failing movie camera useful information do not forget to activate generate api logging audit events api logging enabled api logging disabled api logging updated this npe was spotted by sonarcloud computer desktop na warning potential impacts which other features may be impacted by this fix this could be populated after fix what are the impacted versions x link dependencies link a story or other related things ,0