| Unnamed: 0 (int64, 3–832k) | id (float64, 2.49B–32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, lengths 5–112) | repo_url (string, lengths 34–141) | action (string, 3 classes) | title (string, lengths 2–430) | labels (string, lengths 4–347) | body (string, lengths 5–237k) | index (string, 7 classes) | text_combine (string, lengths 96–237k) | label (string, 2 classes) | text (string, lengths 96–219k) | binary_label (int64, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
13,686
| 16,375,895,669
|
IssuesEvent
|
2021-05-16 04:23:48
|
zlib-ng/zlib-ng
|
https://api.github.com/repos/zlib-ng/zlib-ng
|
closed
|
Building breaks when both _FILE_OFFSET_BITS=64 and ZLIB_COMPAT are defined
|
Compatibility
|
With the `develop` branch (`9f78490`) or the latest release version 2.0.2:
When both `ZLIB_COMPAT` and `_FILE_OFFSET_BITS=64` are defined, the build breaks: `Z_WANT64` gets defined, which activates the macros that rename some functions with a "64" suffix, and the compiler then reports multiple definitions of those functions.
On Solaris (SunOS 5.10, using GCC 5.2.0, at least), `_FILE_OFFSET_BITS` is automatically defined by the system headers when it is not already defined (`/usr/include/sys/feature_tests.h`), and so building for 64-bit breaks on Solaris currently.
It also breaks on Linux (Ubuntu 18.04 at least) if you do `-D_FILE_OFFSET_BITS=64`.
If I interpret the Large File Support specs correctly, explicitly defining `_FILE_OFFSET_BITS` is permitted, and so building with that should work.
E.g., here is one of the errors:
```
In file included from adler32.c:6:0:
adler32.c:132:32: error: redefinition of ‘adler32_combine64’
unsigned long Z_EXPORT PREFIX4(adler32_combine)(unsigned long adler1, unsigned long adler2, z_off64_t len2) {
^
zbuild.h:17:22: note: in definition of macro ‘PREFIX4’
# define PREFIX4(x) x ## 64
^
zlib.h:1802:29: note: previous definition of ‘adler32_combine64’ was here
# define adler32_combine adler32_combine64
^
zbuild.h:14:21: note: in definition of macro ‘PREFIX’
# define PREFIX(x) x
^
adler32.c:128:31: note: in expansion of macro ‘adler32_combine’
unsigned long Z_EXPORT PREFIX(adler32_combine)(unsigned long adler1, unsigned long adler2, z_off_t len2) {
^~~~~~~~~~~~~~~
Makefile:258: recipe for target 'adler32.o' failed
make: *** [adler32.o] Error 1
```
The problem is that after `PREFIX(ident)` expands to `ident`, that `ident` matches one of the renaming macros defined under `Z_WANT64`, so it expands again to `ident64`, which already has another definition.
This can be reproduced in Ubuntu 18 (and I assume other Linux distros) by doing either:
```bash
cmake -D "CMAKE_C_FLAGS=-D_FILE_OFFSET_BITS=64" -D ZLIB_COMPAT=ON ../zlib-ng
cmake --build .
```
or
```bash
CFLAGS="-D_FILE_OFFSET_BITS=64" ../zlib-ng/configure --zlib-compat
make
```
And it can be reproduced on Solaris 10 simply by building without passing `-D_FILE_OFFSET_BITS=64` (and passing `-m64` if needed).
|
True
|
Building breaks when both _FILE_OFFSET_BITS=64 and ZLIB_COMPAT are defined - With the `develop` branch (`9f78490`) or the latest release version 2.0.2:
When both `ZLIB_COMPAT` and `_FILE_OFFSET_BITS=64` are defined, the build breaks: `Z_WANT64` gets defined, which activates the macros that rename some functions with a "64" suffix, and the compiler then reports multiple definitions of those functions.
On Solaris (SunOS 5.10, using GCC 5.2.0, at least), `_FILE_OFFSET_BITS` is automatically defined by the system headers when it is not already defined (`/usr/include/sys/feature_tests.h`), and so building for 64-bit breaks on Solaris currently.
It also breaks on Linux (Ubuntu 18.04 at least) if you do `-D_FILE_OFFSET_BITS=64`.
If I interpret the Large File Support specs correctly, explicitly defining `_FILE_OFFSET_BITS` is permitted, and so building with that should work.
E.g., here is one of the errors:
```
In file included from adler32.c:6:0:
adler32.c:132:32: error: redefinition of ‘adler32_combine64’
unsigned long Z_EXPORT PREFIX4(adler32_combine)(unsigned long adler1, unsigned long adler2, z_off64_t len2) {
^
zbuild.h:17:22: note: in definition of macro ‘PREFIX4’
# define PREFIX4(x) x ## 64
^
zlib.h:1802:29: note: previous definition of ‘adler32_combine64’ was here
# define adler32_combine adler32_combine64
^
zbuild.h:14:21: note: in definition of macro ‘PREFIX’
# define PREFIX(x) x
^
adler32.c:128:31: note: in expansion of macro ‘adler32_combine’
unsigned long Z_EXPORT PREFIX(adler32_combine)(unsigned long adler1, unsigned long adler2, z_off_t len2) {
^~~~~~~~~~~~~~~
Makefile:258: recipe for target 'adler32.o' failed
make: *** [adler32.o] Error 1
```
The problem is that after `PREFIX(ident)` expands to `ident`, that `ident` matches one of the renaming macros defined under `Z_WANT64`, so it expands again to `ident64`, which already has another definition.
This can be reproduced in Ubuntu 18 (and I assume other Linux distros) by doing either:
```bash
cmake -D "CMAKE_C_FLAGS=-D_FILE_OFFSET_BITS=64" -D ZLIB_COMPAT=ON ../zlib-ng
cmake --build .
```
or
```bash
CFLAGS="-D_FILE_OFFSET_BITS=64" ../zlib-ng/configure --zlib-compat
make
```
And it can be reproduced on Solaris 10 simply by building without passing `-D_FILE_OFFSET_BITS=64` (and passing `-m64` if needed).
|
comp
|
building breaks when both file offset bits and zlib compat are defined with the develop branch or the latest release version when both zlib compat and file offset bits are defined building breaks due to z getting defined and activating the definition of the macros that rename some function names to be suffixed with which causes compiler errors due to multiple definitions of these functions on solaris sunos using gcc at least file offset bits is automatically defined by the system headers when it is not already defined usr include sys feature tests h and so building for bit breaks on solaris currently it also breaks on linux ubuntu at least if you do d file offset bits if i interpret the large file support specs correctly explicitly defining file offset bits is permitted and so building with that should work e g here is one of the errors in file included from c c error redefinition of ‘ ’ unsigned long z export combine unsigned long unsigned long z t zbuild h note in definition of macro ‘ ’ define x x zlib h note previous definition of ‘ ’ was here define combine zbuild h note in definition of macro ‘prefix’ define prefix x x c note in expansion of macro ‘ combine’ unsigned long z export prefix combine unsigned long unsigned long z off t makefile recipe for target o failed make error the problem is that after prefix ident expands to ident then ident refers to one of the renaming macros defined due to z and so ident then expands to but there is already another definition of this can be reproduced in ubuntu and i assume other linux distros by doing either bash cmake d cmake c flags d file offset bits d zlib compat on zlib ng cmake build or bash cflags d file offset bits zlib ng configure zlib compat make and can be reproduced in solaris by simply trying to build without giving d file offset bits and with giving if needed
| 1
|
33,146
| 6,159,874,354
|
IssuesEvent
|
2017-06-29 02:18:34
|
randdusing/cordova-plugin-bluetoothle
|
https://api.github.com/repos/randdusing/cordova-plugin-bluetoothle
|
closed
|
Where is the example?
|
documentation
|
Hi, I'm trying to use this plugin, but I can't find a single example of how to use the connect() or startScan() functions. Can you provide some examples? Thank you.
|
1.0
|
Where is the example? - Hi, I'm trying to use this plugin, but I can't find a single example of how to use the connect() or startScan() functions. Can you provide some examples? Thank you.
|
non_comp
|
where is the example hi i m trying to use this plugin but i can t find one single example how to use connect or startscan functions can u provide us some examples thank you
| 0
|
3,044
| 5,953,293,855
|
IssuesEvent
|
2017-05-27 06:12:22
|
LibertyForce-Gmod/Weapon-Properties-Editor
|
https://api.github.com/repos/LibertyForce-Gmod/Weapon-Properties-Editor
|
opened
|
Incompatible with "Manual Weapon Pickup"
|
incompatibility
|
Incompatible with [Manual Weapon Pickup](http://steamcommunity.com/sharedfiles/filedetails/?id=167545348).
[User report](http://steamcommunity.com/workshop/filedetails/discussion/933160196/1291817208490965521/#c1291817208493344026):
> This addon isn't compatible with replacements, it make replacements useless. For example, you replace hl2 pistol with some other custom weapon, but this addon will make you pick up hl2 pistol anyway.
Weapon replacements don't work.
|
True
|
Incompatible with "Manual Weapon Pickup" - Incompatible with [Manual Weapon Pickup](http://steamcommunity.com/sharedfiles/filedetails/?id=167545348).
[User report](http://steamcommunity.com/workshop/filedetails/discussion/933160196/1291817208490965521/#c1291817208493344026):
> This addon isn't compatible with replacements, it make replacements useless. For example, you replace hl2 pistol with some other custom weapon, but this addon will make you pick up hl2 pistol anyway.
Weapon replacements don't work.
|
comp
|
incompatible with manual weapon pickup incompatible with this addon isn t compatible with replacements it make replacements useless for example you replace pistol with some other custom weapon but this addon will make you pick up pistol anyway weapon replacements doesn t work
| 1
|
17,678
| 24,370,860,360
|
IssuesEvent
|
2022-10-03 19:09:25
|
bitcoindevkit/bdk-ffi
|
https://api.github.com/repos/bitcoindevkit/bdk-ffi
|
closed
|
Add ability to retrieve private master key from a BIP39 seed
|
ldk-compatibility
|
This can be used to seed an LDK [KeysManager](https://docs.rs/lightning/0.0.110/lightning/chain/keysinterface/struct.KeysManager.html) with the first 32 bytes from this 64-byte seed.
This enables the entropy we give to LDK to be within the backup system of the on-chain wallet, and allows for 12-to-24-word mnemonics with optional passphrases.
|
True
|
Add ability to retrieve private master key from a BIP39 seed - This can be used to seed an LDK [KeysManager](https://docs.rs/lightning/0.0.110/lightning/chain/keysinterface/struct.KeysManager.html) with the first 32 bytes from this 64-byte seed.
This enables the entropy we give to LDK to be within the backup system of the on-chain wallet, and allows for 12-to-24-word mnemonics with optional passphrases.
|
comp
|
add ability to retrieve private master key from a seed this can be used to seed an ldk with the first bytes from this byte seed this enables the entropy we give to ldk to be within the backup system of the on chain wallet and allow for to word mnemonics with optional passphrases
| 1
|
94,695
| 8,513,837,074
|
IssuesEvent
|
2018-10-31 16:59:44
|
friendica/friendica
|
https://api.github.com/repos/friendica/friendica
|
closed
|
PHP 7.0 Tests: Could not load mock Friendica\Database\DBStructure, class already exists
|
Bug Tests
|
See https://travis-ci.org/friendica/friendica/jobs/448349959
Paging @nupplaphil
|
1.0
|
PHP 7.0 Tests: Could not load mock Friendica\Database\DBStructure, class already exists - See https://travis-ci.org/friendica/friendica/jobs/448349959
Paging @nupplaphil
|
non_comp
|
php tests could not load mock friendica database dbstructure class already exists see paging nupplaphil
| 0
|
9,820
| 11,862,761,445
|
IssuesEvent
|
2020-03-25 18:29:25
|
ldtteam/minecolonies
|
https://api.github.com/repos/ldtteam/minecolonies
|
closed
|
Add Rack Compatibility with Mouse Tweaks
|
Compatibility: Mod
|
I'd like to use Mouse Tweaks to sort (usually with my middle-click) the inventory of a Rack.
Please and thanks!
|
True
|
Add Rack Compatibility with Mouse Tweaks - I'd like to use Mouse Tweaks to sort (usually with my middle-click) the inventory of a Rack.
Please and thanks!
|
comp
|
add rack compatibility with mouse tweaks i d like to use mouse tweaks to sort usually with my middle click the inventory of a rack please and thanks
| 1
|
40,536
| 16,502,630,145
|
IssuesEvent
|
2021-05-25 15:44:14
|
microsoft/vscode-cpptools
|
https://api.github.com/repos/microsoft/vscode-cpptools
|
closed
|
Extension causes high cpu load
|
Language Service more info needed performance
|
- Issue Type: `Performance`
- Extension Name: `cpptools`
- Extension Version: `1.3.1`
- OS Version: `Windows_NT x64 10.0.19042`
- VS Code version: `1.56.2`
:warning: Make sure to **attach** this file from your *home*-directory:
:warning:`d:\Users\Cuneyt.000\AppData\Local\Temp\ms-vscode.cpptools-unresponsive.cpuprofile.txt`
Find more details here: https://github.com/microsoft/vscode/wiki/Explain-extension-causes-
[ms-vscode.cpptools-unresponsive.cpuprofile.txt](https://github.com/microsoft/vscode-cpptools/files/6501656/ms-vscode.cpptools-unresponsive.cpuprofile.txt)
[austin.code-gnu-global-unresponsive.cpuprofile.txt](https://github.com/microsoft/vscode-cpptools/files/6501660/austin.code-gnu-global-unresponsive.cpuprofile.txt)
high-cpu-load
|
1.0
|
Extension causes high cpu load - - Issue Type: `Performance`
- Extension Name: `cpptools`
- Extension Version: `1.3.1`
- OS Version: `Windows_NT x64 10.0.19042`
- VS Code version: `1.56.2`
:warning: Make sure to **attach** this file from your *home*-directory:
:warning:`d:\Users\Cuneyt.000\AppData\Local\Temp\ms-vscode.cpptools-unresponsive.cpuprofile.txt`
Find more details here: https://github.com/microsoft/vscode/wiki/Explain-extension-causes-
[ms-vscode.cpptools-unresponsive.cpuprofile.txt](https://github.com/microsoft/vscode-cpptools/files/6501656/ms-vscode.cpptools-unresponsive.cpuprofile.txt)
[austin.code-gnu-global-unresponsive.cpuprofile.txt](https://github.com/microsoft/vscode-cpptools/files/6501660/austin.code-gnu-global-unresponsive.cpuprofile.txt)
high-cpu-load
|
non_comp
|
extension causes high cpu load issue type performance extension name cpptools extension version os version windows nt vs code version warning make sure to attach this file from your home directory warning d users cuneyt appdata local temp ms vscode cpptools unresponsive cpuprofile txt find more details here high cpu load
| 0
|
12,425
| 14,677,312,120
|
IssuesEvent
|
2020-12-30 22:53:06
|
scylladb/scylla
|
https://api.github.com/repos/scylladb/scylla
|
opened
|
Frozen set may be created out-of-order via a prepared statement
|
CQL bug cassandra 2.2 compatibility
|
In Scylla, "set" collections are sorted - by the element type's natural order.
Although this is not clearly documented as far as I can tell (?), clients may assume that this is also the case for a **frozen** set, for example one used as a partition key.
It appears that Scylla can forget to sort the frozen set when it is inserted via a prepared statement, although it does sort it as expected when the set is inlined in the statement. Cassandra appears to sort the frozen set correctly for prepared statements as well.
Here is one subtle and elaborate test case which breaks using a Python driver like an ordinary client. Take a look at this test code:
```python
@pytest.fixture(scope="session")
def table1(cql, test_keyspace):
table = test_keyspace + "." + unique_name()
cql.execute(f"CREATE TABLE {table} (k frozen<map<set<int>, int>> PRIMARY KEY)")
yield table
cql.execute("DROP TABLE " + table)
def test_broken(cql, table1):
insert = cql.prepare(f"INSERT INTO {table1} (k) VALUES (?)")
cql.execute(insert, [{tuple([8, 9, 7]): 1}])
for row in cql.execute(f"SELECT * from {table1}"):
k = row.k
print(k)
if isinstance(k, OrderedMapSerializedKey):
print(k._index)
print(list(k.items())) # This line throws when run on Scylla, but works fine on Cassandra
```
The test has a nested frozen collection - a map of sets. When run against Cassandra this test works, but when run against Scylla, the `k.items()` call throws. The reason is very subtle: It appears that the Python driver keeps the frozen map keys - each itself a set - serialized in the way it got them from the server. The items() function appears to do a wasteful back-and-forth conversion - it deserializes each key (resulting in a SortedSet object) and then serializes it again to look it up in the map. Somehow during these conversions, the driver re-sorts the elements, and it fails to look the key up in the map. Concretely, the serialized key stored in the map is the wrongly sorted one (note \x08, \t, \x07, which mean 8, 9, 7):
```
\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x08\x00\x00\x00\x04\x00\x00\x00\t\x00\x00\x00\x04\x00\x00\x00\x07
```
but the key the code looks up (and fails to find) is the sorted one (note \x07, \x08, \t, meaning 7, 8, 9):
```
\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x07\x00\x00\x00\x04\x00\x00\x00\x08\x00\x00\x00\x04\x00\x00\x00\t
```
The test passes if instead of `tuple([8, 9, 7])` we pass `tuple([7, 8, 9])` to the prepared statement!
The test also passes if instead of prepared statement, we use an inline statement. Both commands below work the same:
```
cql.execute("INSERT INTO " + table1 + " (k) VALUES ({{7, 8, 9}: 1})")
cql.execute("INSERT INTO " + table1 + " (k) VALUES ({{8, 9, 7}: 1})")
```
Note that this test appears very contrived, but it is actually a greatly simplified version of a real test I tried to run (translated, perhaps not very well, from a Java test from Cassandra) which failed on Scylla, and this is a simplified example showing why. But I will try to create an even simpler test which only needs a `frozen<set>` without nesting.
|
True
|
Frozen set may be created out-of-order via a prepared statement - In Scylla, "set" collections are sorted - by the element type's natural order.
Although this is not clearly documented as far as I can tell (?), clients may assume that this is also the case for a **frozen** set, for example one used as a partition key.
It appears that Scylla can forget to sort the frozen set when it is inserted via a prepared statement, although it does sort it as expected when the set is inlined in the statement. Cassandra appears to sort the frozen set correctly for prepared statements as well.
Here is one subtle and elaborate test case which breaks using a Python driver like an ordinary client. Take a look at this test code:
```python
@pytest.fixture(scope="session")
def table1(cql, test_keyspace):
table = test_keyspace + "." + unique_name()
cql.execute(f"CREATE TABLE {table} (k frozen<map<set<int>, int>> PRIMARY KEY)")
yield table
cql.execute("DROP TABLE " + table)
def test_broken(cql, table1):
insert = cql.prepare(f"INSERT INTO {table1} (k) VALUES (?)")
cql.execute(insert, [{tuple([8, 9, 7]): 1}])
for row in cql.execute(f"SELECT * from {table1}"):
k = row.k
print(k)
if isinstance(k, OrderedMapSerializedKey):
print(k._index)
print(list(k.items())) # This line throws when run on Scylla, but works fine on Cassandra
```
The test has a nested frozen collection - a map of sets. When run against Cassandra this test works, but when run against Scylla, the `k.items()` call throws. The reason is very subtle: It appears that the Python driver keeps the frozen map keys - each itself a set - serialized in the way it got them from the server. The items() function appears to do a wasteful back-and-forth conversion - it deserializes each key (resulting in a SortedSet object) and then serializes it again to look it up in the map. Somehow during these conversions, the driver re-sorts the elements, and it fails to look the key up in the map. Concretely, the serialized key stored in the map is the wrongly sorted one (note \x08, \t, \x07, which mean 8, 9, 7):
```
\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x08\x00\x00\x00\x04\x00\x00\x00\t\x00\x00\x00\x04\x00\x00\x00\x07
```
but the key the code looks up (and fails to find) is the sorted one (note \x07, \x08, \t, meaning 7, 8, 9):
```
\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x07\x00\x00\x00\x04\x00\x00\x00\x08\x00\x00\x00\x04\x00\x00\x00\t
```
The test passes if instead of `tuple([8, 9, 7])` we pass `tuple([7, 8, 9])` to the prepared statement!
The test also passes if instead of prepared statement, we use an inline statement. Both commands below work the same:
```
cql.execute("INSERT INTO " + table1 + " (k) VALUES ({{7, 8, 9}: 1})")
cql.execute("INSERT INTO " + table1 + " (k) VALUES ({{8, 9, 7}: 1})")
```
Note that this test appears very contrived, but it is actually a greatly simplified version of a real test I tried to run (translated, perhaps not very well, from a Java test from Cassandra) which failed on Scylla, and this is a simplified example showing why. But I will try to create an even simpler test which only needs a `frozen<set>` without nesting.
|
comp
|
frozen set may be created out of order via a prepared statement in scylla set collections are sorted by the element type s natural order although this is not clearly documented as far as i can tell clients may assume that this the case also for a frozen set for example used as a partition key it appears that scylla can forget to sort the frozen set when inserted using a prepared statement although does sort it as expected when inserted inline in the statement it appears that cassandra correctly sorts the frozen set also for prepared statement here is one subtle and elaborate test case which breaks using a python driver like an ordinary client take a look at this test code python pytest fixture scope session def cql test keyspace table test keyspace unique name cql execute f create table table k frozen int primary key yield table cql execute drop table table def test broken cql insert cql prepare f insert into k values cql execute insert for row in cql execute f select from k row k print k if isinstance k orderedmapserializedkey print k index print list k items this line throws when run on scylla but works fine on cassandra the test has a nested frozen collection a map of sets when run against cassandra this test works but when run against scylla the k items call throws the reason is very subtle it appears that the python driver keeps the frozen map keys each itself is a set serialized in the way it got them from the server the items function appears to do a wasteful back and forth conversion it deserializes each key resulting in a sortedset object and then serializes it again to look it up in the map somehow during these conversions the driver re sorts the elements and it fails to look it up in the map concretely the serialized key stored in the map is the wrongly sorted note t which means t but the key the code looks up and fails is the sorted one note t meaning t the test passes if instead of tuple we pass tuple to the prepared statement the test also passes if 
instead of prepared statement we use an inline statement both commands below work the same cql execute insert into k values cql execute insert into k values note that this test appears very contrived but it is actually a greatly simplified version of a real test i tried to run translated perhaps not very well from a java test from cassandra which failed on scylla and this is a simplified example showing why but i will try to create an even simpler test which only needs a frozen without nesting
| 1
|
4,183
| 4,958,486,721
|
IssuesEvent
|
2016-12-02 09:56:57
|
w3c/html
|
https://api.github.com/repos/w3c/html
|
closed
|
Get rid of cross domain ressources and requests
|
Needs incubation security
|
For security, privacy, and efficiency reasons, cross-domain requests should be phased out of HTML.
Reason for this:
Modern webpages include various resources from all over the internet instead of serving the content from a domain-local URL. This enables XSS, CSRF, script injection, de-anonymisation, user tracking, etc.
For a more secure web, this bad habit should be discouraged or, better, abolished in the not too distant future.
|
True
|
Get rid of cross domain ressources and requests - For security, privacy, and efficiency reasons, cross-domain requests should be phased out of HTML.
Reason for this:
Modern webpages include various resources from all over the internet instead of serving the content from a domain-local URL. This enables XSS, CSRF, script injection, de-anonymisation, user tracking, etc.
For a more secure web, this bad habit should be discouraged or, better, abolished in the not too distant future.
|
non_comp
|
get rid of cross domain ressources and requests for security privacy and efficiency reasons cross domain requests should be phased out from html reason for this modern webpages include various resources from all over the internet instead of serving the content from a domain local url this helps with xss csrf script injection de anonymisation user tracking etc for a more secure web this bad habit should be discouraged or better be abolished in the not too distant future
| 0
|
191,866
| 22,215,857,933
|
IssuesEvent
|
2022-06-08 01:30:47
|
Nivaskumark/kernel_v4.1.15
|
https://api.github.com/repos/Nivaskumark/kernel_v4.1.15
|
reopened
|
CVE-2022-22764 (High) detected in linuxlinux-4.6
|
security vulnerability
|
## CVE-2022-22764 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.1.15/commit/00db4e8795bcbec692fb60b19160bdd763ad42e3">00db4e8795bcbec692fb60b19160bdd763ad42e3</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Mozilla developers Paul Adenot and the Mozilla Fuzzing Team reported memory safety bugs present in Firefox 96 and Firefox ESR 91.5. Some of these bugs showed evidence of memory corruption and we presume that with enough effort some of these could have been exploited to run arbitrary code.
<p>Publish Date: 2022-01-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22764>CVE-2022-22764</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-22764">https://nvd.nist.gov/vuln/detail/CVE-2022-22764</a></p>
<p>Release Date: 2022-01-07</p>
<p>Fix Resolution: linux-libc-headers - 5.14;linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-22764 (High) detected in linuxlinux-4.6 - ## CVE-2022-22764 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.1.15/commit/00db4e8795bcbec692fb60b19160bdd763ad42e3">00db4e8795bcbec692fb60b19160bdd763ad42e3</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Mozilla developers Paul Adenot and the Mozilla Fuzzing Team reported memory safety bugs present in Firefox 96 and Firefox ESR 91.5. Some of these bugs showed evidence of memory corruption and we presume that with enough effort some of these could have been exploited to run arbitrary code.
<p>Publish Date: 2022-01-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22764>CVE-2022-22764</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2022-22764">https://nvd.nist.gov/vuln/detail/CVE-2022-22764</a></p>
<p>Release Date: 2022-01-07</p>
<p>Fix Resolution: linux-libc-headers - 5.14;linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_comp
|
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details mozilla developers paul adenot and the mozilla fuzzing team reported memory safety bugs present in firefox and firefox esr some of these bugs showed evidence of memory corruption and we presume that with enough effort some of these could have been exploited to run arbitrary code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux libc headers linux yocto gitautoinc gitautoinc step up your open source security game with whitesource
| 0
|
237,376
| 26,084,105,293
|
IssuesEvent
|
2022-12-25 21:26:17
|
MValle21/oathkeeper
|
https://api.github.com/repos/MValle21/oathkeeper
|
opened
|
CVE-2022-46175 (High) detected in json5-2.1.3.tgz, json5-1.0.1.tgz
|
security vulnerability
|
## CVE-2022-46175 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>json5-2.1.3.tgz</b>, <b>json5-1.0.1.tgz</b></p></summary>
<p>
<details><summary><b>json5-2.1.3.tgz</b></p></summary>
<p>JSON for humans.</p>
<p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-2.1.3.tgz">https://registry.npmjs.org/json5/-/json5-2.1.3.tgz</a></p>
<p>Path to dependency file: /docs/package.json</p>
<p>Path to vulnerable library: /docs/node_modules/json5/package.json</p>
<p>
Dependency Hierarchy:
- core-2.0.0-alpha.415a7973f.tgz (Root Library)
- core-7.12.9.tgz
- :x: **json5-2.1.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>json5-1.0.1.tgz</b></p></summary>
<p>JSON for humans.</p>
<p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-1.0.1.tgz">https://registry.npmjs.org/json5/-/json5-1.0.1.tgz</a></p>
<p>Path to dependency file: /docs/package.json</p>
<p>Path to vulnerable library: /docs/node_modules/react-dev-utils/node_modules/json5/package.json,/docs/node_modules/loader-utils/node_modules/json5/package.json</p>
<p>
Dependency Hierarchy:
- core-2.0.0-alpha.415a7973f.tgz (Root Library)
- react-dev-utils-10.2.1.tgz
- loader-utils-1.2.3.tgz
- :x: **json5-1.0.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/MValle21/oathkeeper/commit/43c00a05bdb772edb5194a57f42ee834b37f3774">43c00a05bdb772edb5194a57f42ee834b37f3774</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain by hand (e.g. for config files). The `parse` method of the JSON5 library before and including version `2.2.1` does not restrict parsing of keys named `__proto__`, allowing specially crafted strings to pollute the prototype of the resulting object. This vulnerability pollutes the prototype of the object returned by `JSON5.parse` and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations. This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from `JSON5.parse`. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution. `JSON5.parse` should restrict parsing of `__proto__` keys when parsing JSON strings to objects. As a point of reference, the `JSON.parse` method included in JavaScript ignores `__proto__` keys. Simply changing `JSON5.parse` to `JSON.parse` in the examples above mitigates this vulnerability. This vulnerability is patched in json5 version 2.2.2 and later.
<p>Publish Date: 2022-12-24
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-46175>CVE-2022-46175</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-46175">https://www.cve.org/CVERecord?id=CVE-2022-46175</a></p>
<p>Release Date: 2022-12-24</p>
<p>Fix Resolution: json5 - 2.2.2</p>
</p>
</details>
<p></p>
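The vulnerability description above contrasts `JSON5.parse` with the built-in `JSON.parse`, which ignores `__proto__` keys. A minimal Node.js sketch of that safe behavior (illustrative only; it does not exercise the vulnerable json5 code path):

```javascript
// JSON.parse defines "__proto__" as a plain own data property, so
// neither the parsed object's prototype nor the global
// Object.prototype is polluted. (Vulnerable json5 <= 2.2.1 instead
// assigned through to the prototype.)
const parsed = JSON.parse('{"__proto__": {"polluted": true}}');

console.log(Object.getPrototypeOf(parsed) === Object.prototype); // true
console.log(parsed.polluted);                                    // undefined
console.log({}.polluted);                                        // undefined
```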
|
True
|
CVE-2022-46175 (High) detected in json5-2.1.3.tgz, json5-1.0.1.tgz - ## CVE-2022-46175 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>json5-2.1.3.tgz</b>, <b>json5-1.0.1.tgz</b></p></summary>
<p>
<details><summary><b>json5-2.1.3.tgz</b></p></summary>
<p>JSON for humans.</p>
<p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-2.1.3.tgz">https://registry.npmjs.org/json5/-/json5-2.1.3.tgz</a></p>
<p>Path to dependency file: /docs/package.json</p>
<p>Path to vulnerable library: /docs/node_modules/json5/package.json</p>
<p>
Dependency Hierarchy:
- core-2.0.0-alpha.415a7973f.tgz (Root Library)
- core-7.12.9.tgz
- :x: **json5-2.1.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>json5-1.0.1.tgz</b></p></summary>
<p>JSON for humans.</p>
<p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-1.0.1.tgz">https://registry.npmjs.org/json5/-/json5-1.0.1.tgz</a></p>
<p>Path to dependency file: /docs/package.json</p>
<p>Path to vulnerable library: /docs/node_modules/react-dev-utils/node_modules/json5/package.json,/docs/node_modules/loader-utils/node_modules/json5/package.json</p>
<p>
Dependency Hierarchy:
- core-2.0.0-alpha.415a7973f.tgz (Root Library)
- react-dev-utils-10.2.1.tgz
- loader-utils-1.2.3.tgz
- :x: **json5-1.0.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/MValle21/oathkeeper/commit/43c00a05bdb772edb5194a57f42ee834b37f3774">43c00a05bdb772edb5194a57f42ee834b37f3774</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain by hand (e.g. for config files). The `parse` method of the JSON5 library before and including version `2.2.1` does not restrict parsing of keys named `__proto__`, allowing specially crafted strings to pollute the prototype of the resulting object. This vulnerability pollutes the prototype of the object returned by `JSON5.parse` and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations. This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from `JSON5.parse`. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution. `JSON5.parse` should restrict parsing of `__proto__` keys when parsing JSON strings to objects. As a point of reference, the `JSON.parse` method included in JavaScript ignores `__proto__` keys. Simply changing `JSON5.parse` to `JSON.parse` in the examples above mitigates this vulnerability. This vulnerability is patched in json5 version 2.2.2 and later.
<p>Publish Date: 2022-12-24
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-46175>CVE-2022-46175</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-46175">https://www.cve.org/CVERecord?id=CVE-2022-46175</a></p>
<p>Release Date: 2022-12-24</p>
<p>Fix Resolution: json5 - 2.2.2</p>
</p>
</details>
<p></p>
|
non_comp
|
cve high detected in tgz tgz cve high severity vulnerability vulnerable libraries tgz tgz tgz json for humans library home page a href path to dependency file docs package json path to vulnerable library docs node modules package json dependency hierarchy core alpha tgz root library core tgz x tgz vulnerable library tgz json for humans library home page a href path to dependency file docs package json path to vulnerable library docs node modules react dev utils node modules package json docs node modules loader utils node modules package json dependency hierarchy core alpha tgz root library react dev utils tgz loader utils tgz x tgz vulnerable library found in head commit a href found in base branch master vulnerability details is an extension to the popular json file format that aims to be easier to write and maintain by hand e g for config files the parse method of the library before and including version does not restrict parsing of keys named proto allowing specially crafted strings to pollute the prototype of the resulting object this vulnerability pollutes the prototype of the object returned by parse and not the global object prototype which is the commonly understood definition of prototype pollution however polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations this vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from parse the actual impact will depend on how applications utilize the returned object and how they filter unwanted keys but could include denial of service cross site scripting elevation of privilege and in extreme cases remote code execution parse should restrict parsing of proto keys when parsing json strings to objects as a point of reference the json parse method included in javascript ignores proto keys simply changing parse to json parse in the examples above mitigates this vulnerability this 
vulnerability is patched in version and later publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
| 0
|
16,481
| 22,304,514,522
|
IssuesEvent
|
2022-06-13 11:50:48
|
hanubeki/dake
|
https://api.github.com/repos/hanubeki/dake
|
closed
|
Added mines not colored as additions
|
incompatibility 5.2
|
No black border; they are still white regardless of whether they are addition notes or not.
|
True
|
Added mines not colored as additions - No black border; they are still white regardless of whether they are addition notes or not.
|
comp
|
added mines not colored as additions no black border they are still white regardless if they are addition note or not
| 1
|
218,029
| 7,330,287,442
|
IssuesEvent
|
2018-03-05 09:25:35
|
geosolutions-it/MapStore2
|
https://api.github.com/repos/geosolutions-it/MapStore2
|
opened
|
Measure tool, initial mispositioning of tooltip while drawing
|
LeafLetJS Priority: Medium bug
|
### Description
When you start drawing something the tooltip starts in the top left corner
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
(use this site: https://www.whatsmybrowser.org/ for non expert users)
- [ ] Internet Explorer
- [x] Chrome 64.0.3282.186
- [ ] Firefox
- [ ] Safari
*Browser Version Affected*
*Steps to reproduce*
- open new map with leaflet
- open measure tool
- choose any tool
*Expected Result*
The tool selected gets enabled and no tooltip is shown in the map
*Current Result*
The tool selected gets enabled and a tooltip appears in the top left corner

### Other useful information (optional):
|
1.0
|
Measure tool, initial mispositioning of tooltip while drawing - ### Description
When you start drawing something the tooltip starts in the top left corner
### In case of Bug (otherwise remove this paragraph)
*Browser Affected*
(use this site: https://www.whatsmybrowser.org/ for non expert users)
- [ ] Internet Explorer
- [x] Chrome 64.0.3282.186
- [ ] Firefox
- [ ] Safari
*Browser Version Affected*
*Steps to reproduce*
- open new map with leaflet
- open measure tool
- choose any tool
*Expected Result*
The tool selected gets enabled and no tooltip is shown in the map
*Current Result*
The tool selected gets enabled and a tooltip appears in the top left corner

### Other useful information (optional):
|
non_comp
|
measure tool initial mispositioning of tooltip while drawing description when you start drawing something the tooltip starts in the top left corner in case of bug otherwise remove this paragraph browser affected use this site for non expert users internet explorer chrome firefox safari browser version affected steps to reproduce open new map with leaflet open measure tool choose any tool expected result the tool selected gets enabled and no tooltip is shown in the map current result the tool selected gets enabled and a tooltip appear in the top left corner other useful information optional
| 0
|
66,534
| 14,788,917,207
|
IssuesEvent
|
2021-01-12 09:52:23
|
andygonzalez2010/store
|
https://api.github.com/repos/andygonzalez2010/store
|
opened
|
CVE-2019-16303 (High) detected in generator-jhipster-6.0.1.tgz
|
security vulnerability
|
## CVE-2019-16303 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>generator-jhipster-6.0.1.tgz</b></p></summary>
<p>Spring Boot + Angular/React in one handy generator</p>
<p>Library home page: <a href="https://registry.npmjs.org/generator-jhipster/-/generator-jhipster-6.0.1.tgz">https://registry.npmjs.org/generator-jhipster/-/generator-jhipster-6.0.1.tgz</a></p>
<p>Path to dependency file: store/package.json</p>
<p>Path to vulnerable library: store/node_modules/generator-jhipster/package.json</p>
<p>
Dependency Hierarchy:
- :x: **generator-jhipster-6.0.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/andygonzalez2010/store/commit/84990bf9d6b7cdf76851573df077d70766b08e91">84990bf9d6b7cdf76851573df077d70766b08e91</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A class generated by the Generator in JHipster before 6.3.0 and JHipster Kotlin through 1.1.0 produces code that uses an insecure source of randomness (apache.commons.lang3 RandomStringUtils). This allows an attacker (if able to obtain their own password reset URL) to compute the value for all other password resets for other accounts, thus allowing privilege escalation or account takeover.
<p>Publish Date: 2019-09-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16303>CVE-2019-16303</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16303">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16303</a></p>
<p>Release Date: 2019-09-14</p>
<p>Fix Resolution: 6.3.0</p>
</p>
</details>
<p></p>
|
True
|
CVE-2019-16303 (High) detected in generator-jhipster-6.0.1.tgz - ## CVE-2019-16303 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>generator-jhipster-6.0.1.tgz</b></p></summary>
<p>Spring Boot + Angular/React in one handy generator</p>
<p>Library home page: <a href="https://registry.npmjs.org/generator-jhipster/-/generator-jhipster-6.0.1.tgz">https://registry.npmjs.org/generator-jhipster/-/generator-jhipster-6.0.1.tgz</a></p>
<p>Path to dependency file: store/package.json</p>
<p>Path to vulnerable library: store/node_modules/generator-jhipster/package.json</p>
<p>
Dependency Hierarchy:
- :x: **generator-jhipster-6.0.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/andygonzalez2010/store/commit/84990bf9d6b7cdf76851573df077d70766b08e91">84990bf9d6b7cdf76851573df077d70766b08e91</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A class generated by the Generator in JHipster before 6.3.0 and JHipster Kotlin through 1.1.0 produces code that uses an insecure source of randomness (apache.commons.lang3 RandomStringUtils). This allows an attacker (if able to obtain their own password reset URL) to compute the value for all other password resets for other accounts, thus allowing privilege escalation or account takeover.
<p>Publish Date: 2019-09-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16303>CVE-2019-16303</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16303">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16303</a></p>
<p>Release Date: 2019-09-14</p>
<p>Fix Resolution: 6.3.0</p>
</p>
</details>
<p></p>
|
non_comp
|
cve high detected in generator jhipster tgz cve high severity vulnerability vulnerable library generator jhipster tgz spring boot angular react in one handy generator library home page a href path to dependency file store package json path to vulnerable library store node modules generator jhipster package json dependency hierarchy x generator jhipster tgz vulnerable library found in head commit a href found in base branch master vulnerability details a class generated by the generator in jhipster before and jhipster kotlin through produces code that uses an insecure source of randomness apache commons randomstringutils this allows an attacker if able to obtain their own password reset url to compute the value for all other password resets for other accounts thus allowing privilege escalation or account takeover publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
44,200
| 23,519,903,309
|
IssuesEvent
|
2022-08-19 04:00:36
|
rtCamp/wp-themes-performance-measurement
|
https://api.github.com/repos/rtCamp/wp-themes-performance-measurement
|
opened
|
Performance Report for gridbox theme
|
performance-report gh-runner
|
## THEME: gridbox
- MEASURING TOOL: Lighthouse CI
- LIGHTHOUSE VERSION: 0.9.0
- TIMESTAMP: 2022-08-19T04:00:35
- HTML REPORT ARTIFACTS: https://github.com/rtCamp/wp-themes-performance-measurement/actions/runs/2886953782
> Note: To see full Performance report download artifacts as zip and unzip them to destination folder `.lighthouseci` and use command `npm run lhci open`.
|
True
|
Performance Report for gridbox theme - ## THEME: gridbox
- MEASURING TOOL: Lighthouse CI
- LIGHTHOUSE VERSION: 0.9.0
- TIMESTAMP: 2022-08-19T04:00:35
- HTML REPORT ARTIFACTS: https://github.com/rtCamp/wp-themes-performance-measurement/actions/runs/2886953782
> Note: To see full Performance report download artifacts as zip and unzip them to destination folder `.lighthouseci` and use command `npm run lhci open`.
|
non_comp
|
performance report for gridbox theme theme gridbox measuring tool lighthouse ci lighthouse version timestamp html report artifacts note to see full performance report download artifacts as zip and unzip them to destination folder lighthouseci and use command npm run lhci open
| 0
|
543
| 2,978,551,546
|
IssuesEvent
|
2015-07-16 07:28:21
|
facebook/hhvm
|
https://api.github.com/repos/facebook/hhvm
|
closed
|
mysqli_stmt_bind_result only returning last row from query
|
php5 incompatibility
|
```
if($stmt->num_rows > 0){
//id,body,title,tags,owner,create_date,search_weight,type
$stmt->bind_result($result['id'],$result['body'],$result['title'],$result['tags'],$result['owner'],$result['create_date'],$result['search_weight'],$result['type'],$result['city'],$result['country'],$result['state']);
while($stmt->fetch()){
$data[] = $result;
}
$stmt->close();
}
```
|
True
|
mysqli_stmt_bind_result only returning last row from query - ```
if($stmt->num_rows > 0){
//id,body,title,tags,owner,create_date,search_weight,type
$stmt->bind_result($result['id'],$result['body'],$result['title'],$result['tags'],$result['owner'],$result['create_date'],$result['search_weight'],$result['type'],$result['city'],$result['country'],$result['state']);
while($stmt->fetch()){
$data[] = $result;
}
$stmt->close();
}
```
|
comp
|
mysqli stmt bind result only returning last row from query if stmt num rows id body title tags owner create date search weight type stmt bind result result result result result result result result result result result result while stmt fetch data result stmt close
| 1
|
732,184
| 25,247,976,019
|
IssuesEvent
|
2022-11-15 12:37:46
|
vaticle/typedb-studio
|
https://api.github.com/repos/vaticle/typedb-studio
|
closed
|
Shift+End selects to end of file, not end of line
|
type: bug priority: medium domain: text-editor
|
## Description
When doing Shift+End in a tab with multiple lines of queries/text, it will select up until the end of the file instead of up until the end of the line.
Same is true when doing the opposite with Shift+Home.
This doesn't match the standard behavior seen in most/all text editors.
I'm wondering if that's a Windows-only thing.
## Environment
1. TypeDB version: 2.10
2. OS of TypeDB server: Windows 11
3. Studio version: 2.10-alpha-4
4. OS of Studio: Windows 11
5. Other environment details:
## Reproducible Steps
Steps to create the smallest reproducible scenario:
- Enter two lines of text in the editor
- Place your cursor at the beginning of the first line
- Hit Shift-End
## Expected Output
Select text up until the end of the current line.
## Actual Output
All text is selected.
## Additional information
|
1.0
|
Shift+End selects to end of file, not end of line - ## Description
When doing Shift+End in a tab with multiple lines of queries/text, it will select up until the end of the file instead of up until the end of the line.
Same is true when doing the opposite with Shift+Home.
This doesn't match the standard behavior seen in most/all text editors.
I'm wondering if that's a Windows-only thing.
## Environment
1. TypeDB version: 2.10
2. OS of TypeDB server: Windows 11
3. Studio version: 2.10-alpha-4
4. OS of Studio: Windows 11
5. Other environment details:
## Reproducible Steps
Steps to create the smallest reproducible scenario:
- Enter two lines of text in the editor
- Place your cursor at the beginning of the first line
- Hit Shift-End
## Expected Output
Select text up until the end of the current line.
## Actual Output
All text is selected.
## Additional information
|
non_comp
|
shift end selects to end of file not end of line description when doing shift end in a tab with multiple lines of queries text it will select up until the end of the file instead of up until the end of the line same is true when doing the opposite with shift home this doesn t match the standard behavior seen in most all text editors i m wondering if that s a windows only thing environment typedb version os of typedb server windows studio version alpha os of studio windows other environment details reproducible steps steps to create the smallest reproducible scenario enter two lines of text in the editor place your cursor at the beginning of the first line hit shift end expected output select text up until the end of the current line actual output all text is selected additional information
| 0
|
708,680
| 24,349,794,753
|
IssuesEvent
|
2022-10-02 20:01:04
|
Roguelike-Celebration/azure-mud
|
https://api.github.com/repos/Roguelike-Celebration/azure-mud
|
closed
|
"Make speaker" endpoint should actually work
|
do for 2022 high priority
|
It hits a 404 not found despite theoretically existing
|
1.0
|
"Make speaker" endpoint should actually work - It hits a 404 not found despite theoretically existing
|
non_comp
|
make speaker endpoint should actually work it hits a not found despite theoretically existing
| 0
|
16,223
| 2,612,762,569
|
IssuesEvent
|
2015-02-27 16:33:19
|
crowell/modpagespeed
|
https://api.github.com/repos/crowell/modpagespeed
|
closed
|
CSS Filter: Incorrectly minification not-quoted font-family names with spaces
|
bug imported Priority-High
|
_From [cgeorgak...@gmail.com](https://code.google.com/u/100124221215842728449/) on November 03, 2010 18:24:07_
this style:
font-family: trebuchet ms;
is being minified to:
font-family: trebuchetms;
while it should be:
font-family: "trebuchet ms";
or remain untouched
_Original issue: http://code.google.com/p/modpagespeed/issues/detail?id=5_
|
1.0
|
CSS Filter: Incorrectly minification not-quoted font-family names with spaces - _From [cgeorgak...@gmail.com](https://code.google.com/u/100124221215842728449/) on November 03, 2010 18:24:07_
this style:
font-family: trebuchet ms;
is being minified to:
font-family: trebuchetms;
while it should be:
font-family: "trebuchet ms";
or remain untouched
_Original issue: http://code.google.com/p/modpagespeed/issues/detail?id=5_
|
non_comp
|
css filter incorrectly minification not quoted font family names with spaces from on november this style font family trebuchet ms is being minified to font family trebuchetms while it should be font family trebuchet ms or remain untouched original issue
| 0
|
16,779
| 23,138,857,101
|
IssuesEvent
|
2022-07-28 16:27:38
|
NEZNAMY/TAB
|
https://api.github.com/repos/NEZNAMY/TAB
|
closed
|
Magma DO NOT work
|
Wontfix Compatibility Error
|
### Server version
Magma 1.18.2
### TAB version
3.1.2
### Stack trace
[22:11:54] [Server thread/INFO] [/]: [TAB] Server version: 1.18.2 (v1_18_R2)
[22:11:54] [Server thread/INFO] [/]: [TAB] Your server version is marked as compatible, but a compatibility issue was found. Please report the error below (include your server version & fork too)
[22:11:54] [Server thread/ERROR] [me.ne.ta.pl.bu.Main/]: [TAB]
java.lang.ClassNotFoundException: No class found with possible names [net.minecraft.EnumChatFormat, EnumChatFormat]
at me.neznamy.tab.platforms.bukkit.nms.NMSStorage.getNMSClass(NMSStorage.java:456) ~[?:?] {}
at me.neznamy.tab.platforms.bukkit.nms.NMSStorage.<init>(NMSStorage.java:38) ~[?:?] {}
at me.neznamy.tab.platforms.bukkit.Main.isVersionSupported(Main.java:78) ~[?:?] {}
at me.neznamy.tab.platforms.bukkit.Main.onEnable(Main.java:32) ~[?:?] {}
at org.bukkit.plugin.java.JavaPlugin.setEnabled(JavaPlugin.java:264) ~[forge-1.18.2-40.1.54-universal.jar%2365!/:?] {re:classloading}
at org.bukkit.plugin.java.JavaPluginLoader.enablePlugin(JavaPluginLoader.java:342) ~[forge-1.18.2-40.1.54-universal.jar%2365!/:?] {re:classloading}
at org.bukkit.plugin.SimplePluginManager.enablePlugin(SimplePluginManager.java:480) ~[forge-1.18.2-40.1.54-universal.jar%2365!/:?] {re:classloading}
at org.bukkit.craftbukkit.v1_18_R2.CraftServer.enablePlugin(CraftServer.java:434) ~[forge-1.18.2-40.1.54-universal.jar%2365!/:?] {re:classloading}
at org.bukkit.craftbukkit.v1_18_R2.CraftServer.enablePlugins(CraftServer.java:348) ~[forge-1.18.2-40.1.54-universal.jar%2365!/:?] {re:classloading}
at net.minecraft.server.MinecraftServer.m_129815_(MinecraftServer.java:448) ~[server-1.18.2-20220404.173914-srg.jar%2360!/:?] {re:classloading,pl:accesstransformer:B}
at net.minecraft.server.MinecraftServer.loadLevel(MinecraftServer.java:357) ~[server-1.18.2-20220404.173914-srg.jar%2360!/:?] {re:classloading,pl:accesstransformer:B}
at net.minecraft.server.dedicated.DedicatedServer.m_7038_(DedicatedServer.java:244) ~[server-1.18.2-20220404.173914-srg.jar%2360!/:?] {re:classloading,pl:accesstransformer:B}
at net.minecraft.server.MinecraftServer.m_130011_(MinecraftServer.java:757) ~[server-1.18.2-20220404.173914-srg.jar%2360!/:?] {re:classloading,pl:accesstransformer:B}
at net.minecraft.server.MinecraftServer.m_177918_(MinecraftServer.java:253) ~[server-1.18.2-20220404.173914-srg.jar%2360!/:?] {re:classloading,pl:accesstransformer:B}
at java.lang.Thread.run(Thread.java:833) [?:?] {}
[22:11:54] [Server thread/INFO] [me.ne.ta.pl.bu.Main/]: [TAB] Disabling TAB v3.1.2
### Steps to reproduce (if known)
Just launch server
### Additional info
Using Magma Hybrid Core. When starting the server (1.18.2), it just enables and then disables the plugin.
I tried some past versions that support 1.18.2; they have the same problem
### Checklist
- [X] I am running latest version of the plugin
- [X] I have included a paste of the error
- [X] I have read the Compatibility wiki page and am not trying to run the plugin on an unsupported server version / platform
|
True
|
Magma DO NOT work - ### Server version
Magma 1.18.2
### TAB version
3.1.2
### Stack trace
[22:11:54] [Server thread/INFO] [/]: [TAB] Server version: 1.18.2 (v1_18_R2)
[22:11:54] [Server thread/INFO] [/]: [TAB] Your server version is marked as compatible, but a compatibility issue was found. Please report the error below (include your server version & fork too)
[22:11:54] [Server thread/ERROR] [me.ne.ta.pl.bu.Main/]: [TAB]
java.lang.ClassNotFoundException: No class found with possible names [net.minecraft.EnumChatFormat, EnumChatFormat]
at me.neznamy.tab.platforms.bukkit.nms.NMSStorage.getNMSClass(NMSStorage.java:456) ~[?:?] {}
at me.neznamy.tab.platforms.bukkit.nms.NMSStorage.<init>(NMSStorage.java:38) ~[?:?] {}
at me.neznamy.tab.platforms.bukkit.Main.isVersionSupported(Main.java:78) ~[?:?] {}
at me.neznamy.tab.platforms.bukkit.Main.onEnable(Main.java:32) ~[?:?] {}
at org.bukkit.plugin.java.JavaPlugin.setEnabled(JavaPlugin.java:264) ~[forge-1.18.2-40.1.54-universal.jar%2365!/:?] {re:classloading}
at org.bukkit.plugin.java.JavaPluginLoader.enablePlugin(JavaPluginLoader.java:342) ~[forge-1.18.2-40.1.54-universal.jar%2365!/:?] {re:classloading}
at org.bukkit.plugin.SimplePluginManager.enablePlugin(SimplePluginManager.java:480) ~[forge-1.18.2-40.1.54-universal.jar%2365!/:?] {re:classloading}
at org.bukkit.craftbukkit.v1_18_R2.CraftServer.enablePlugin(CraftServer.java:434) ~[forge-1.18.2-40.1.54-universal.jar%2365!/:?] {re:classloading}
at org.bukkit.craftbukkit.v1_18_R2.CraftServer.enablePlugins(CraftServer.java:348) ~[forge-1.18.2-40.1.54-universal.jar%2365!/:?] {re:classloading}
at net.minecraft.server.MinecraftServer.m_129815_(MinecraftServer.java:448) ~[server-1.18.2-20220404.173914-srg.jar%2360!/:?] {re:classloading,pl:accesstransformer:B}
at net.minecraft.server.MinecraftServer.loadLevel(MinecraftServer.java:357) ~[server-1.18.2-20220404.173914-srg.jar%2360!/:?] {re:classloading,pl:accesstransformer:B}
at net.minecraft.server.dedicated.DedicatedServer.m_7038_(DedicatedServer.java:244) ~[server-1.18.2-20220404.173914-srg.jar%2360!/:?] {re:classloading,pl:accesstransformer:B}
at net.minecraft.server.MinecraftServer.m_130011_(MinecraftServer.java:757) ~[server-1.18.2-20220404.173914-srg.jar%2360!/:?] {re:classloading,pl:accesstransformer:B}
at net.minecraft.server.MinecraftServer.m_177918_(MinecraftServer.java:253) ~[server-1.18.2-20220404.173914-srg.jar%2360!/:?] {re:classloading,pl:accesstransformer:B}
at java.lang.Thread.run(Thread.java:833) [?:?] {}
[22:11:54] [Server thread/INFO] [me.ne.ta.pl.bu.Main/]: [TAB] Disabling TAB v3.1.2
### Steps to reproduce (if known)
Just launch server
### Additional info
Using Magma Hybrid Core. When starting server (1.18.2) it just enables and disables plugin.
I tried some past versions that support 1.18.2, they have same problem
### Checklist
- [X] I am running latest version of the plugin
- [X] I have included a paste of the error
- [X] I have read the Compatibility wiki page and am not trying to run the plugin on an unsupported server version / platform
|
comp
|
magma do not work server version magma tab version stack trace server version your server version is marked as compatible but a compatibility issue was found please report the error below include your server version fork too java lang classnotfoundexception no class found with possible names at me neznamy tab platforms bukkit nms nmsstorage getnmsclass nmsstorage java at me neznamy tab platforms bukkit nms nmsstorage nmsstorage java at me neznamy tab platforms bukkit main isversionsupported main java at me neznamy tab platforms bukkit main onenable main java at org bukkit plugin java javaplugin setenabled javaplugin java re classloading at org bukkit plugin java javapluginloader enableplugin javapluginloader java re classloading at org bukkit plugin simplepluginmanager enableplugin simplepluginmanager java re classloading at org bukkit craftbukkit craftserver enableplugin craftserver java re classloading at org bukkit craftbukkit craftserver enableplugins craftserver java re classloading at net minecraft server minecraftserver m minecraftserver java re classloading pl accesstransformer b at net minecraft server minecraftserver loadlevel minecraftserver java re classloading pl accesstransformer b at net minecraft server dedicated dedicatedserver m dedicatedserver java re classloading pl accesstransformer b at net minecraft server minecraftserver m minecraftserver java re classloading pl accesstransformer b at net minecraft server minecraftserver m minecraftserver java re classloading pl accesstransformer b at java lang thread run thread java disabling tab steps to reproduce if known just launch server additional info using magma hybrid core when starting server it just enables and disables plugin i tried some past versions that support they have same problem checklist i am running latest version of the plugin i have included a paste of the error i have read the compatibility wiki page and am not trying to run the plugin on an unsupported server version platform
| 1
|
2,165
| 4,927,477,188
|
IssuesEvent
|
2016-11-26 19:23:36
|
csf-dev/CSF.Core
|
https://api.github.com/repos/csf-dev/CSF.Core
|
closed
|
In new repo: Add InMemoryQuery
|
breaks-compatibility enhancement
|
Firstly, break `CSF.Data` away to a new repository. Secondly add a new type named `InMemoryQuery`. This will implement `IQuery` backed with an in-memory collection.
This will be an `ICollection` of `Tuple<Type,object,object>` where:
* The `Type` is the item's original type exposed via `GetType` when it was first added.
* The first object is the item's value
* The second object is the item's key value
It will have a method named `Add` which takes two object parameters (an item and its key value) but returns itself so that calls may be chained.
It will also have a method named `AddMany` which takes a collection of typed objects and a lambda (selecting an instance of those objects) to get the key value. This also returns itself in order to chain calls.
The get/theorise calls will use the passed type in order to filter the big collection by types (anything which is assignable from the desired type), and then will look for a key value match.
The query method will just filter on type (anything assignable) and then apply the query predicate to the remaining objects.
|
True
|
In new repo: Add InMemoryQuery - Firstly, break `CSF.Data` away to a new repository. Secondly add a new type named `InMemoryQuery`. This will implement `IQuery` backed with an in-memory collection.
This will be an `ICollection` of `Tuple<Type,object,object>` where:
* The `Type` is the item's original type exposed via `GetType` when it was first added.
* The first object is the item's value
* The second object is the item's key value
It will have a method named `Add` which takes two object parameters (an item and its key value) but returns itself so that calls may be chained.
It will also have a method named `AddMany` which takes a collection of typed objects and a lambda (selecting an instance of those objects) to get the key value. This also returns itself in order to chain calls.
The get/theorise calls will use the passed type in order to filter the big collection by types (anything which is assignable from the desired type), and then will look for a key value match.
The query method will just filter on type (anything assignable) and then apply the query predicate to the remaining objects.
|
comp
|
in new repo add inmemoryquery firstly break csf data away to a new repository secondly add a new type named inmemoryquery this will implement iquery backed with an in memory collection this will be an icollection of tuple where the type is the item s original type exposed via gettype when it was first added the first object is the item s value there second object is the item s key value it will have a method named add which takes two object parameters an item and it s key value but returns itself so that calls may be chained it will also have a method named addmany which takes a collection of typed objects and a lambda selecting an instance of those objects to get the key value this also returns itself in order to chain calls the get theorise calls will use the passed type in order to filter the big collection by types anything which is assignable from the desired type and then will look for a key value match the query method will just filter on type anything assignable and then apply the query predicate to the remaining objects
| 1
|
15,949
| 20,995,188,782
|
IssuesEvent
|
2022-03-29 12:57:51
|
spring-projects-experimental/spring-native
|
https://api.github.com/repos/spring-projects-experimental/spring-native
|
closed
|
commandlinerunner-log4j2 fails with GraalVM 22.1
|
type: compatibility
|
```
openjdk version "17.0.3" 2022-04-19
OpenJDK Runtime Environment GraalVM CE 22.1.0-dev (build 17.0.3+4-jvmci-22.1-b03)
OpenJDK 64-Bit Server VM GraalVM CE 22.1.0-dev (build 17.0.3+4-jvmci-22.1-b03, mixed mode, sharing)
```
```
========================================================================================================================
GraalVM Native Image: Generating 'commandlinerunner-log4j2' (executable)...
========================================================================================================================
[1/7] Initializing... (0,0s @ 0,07GB)
Error: ImageSingletons do not contain key com.oracle.svm.hosted.LinkAtBuildTimeSupport
Error: Use -H:+ReportExceptionStackTraces to print stacktrace of underlying exception
------------------------------------------------------------------------------------------------------------------------
0,1s (6,4% of total time) in 8 GCs | Peak RSS: 0,59GB | CPU load: 5,14
========================================================================================================================
Failed generating 'commandlinerunner-log4j2' after 1,4s.
```
Looks like https://github.com/spring-projects-experimental/spring-native/issues/1546
|
True
|
commandlinerunner-log4j2 fails with GraalVM 22.1 - ```
openjdk version "17.0.3" 2022-04-19
OpenJDK Runtime Environment GraalVM CE 22.1.0-dev (build 17.0.3+4-jvmci-22.1-b03)
OpenJDK 64-Bit Server VM GraalVM CE 22.1.0-dev (build 17.0.3+4-jvmci-22.1-b03, mixed mode, sharing)
```
```
========================================================================================================================
GraalVM Native Image: Generating 'commandlinerunner-log4j2' (executable)...
========================================================================================================================
[1/7] Initializing... (0,0s @ 0,07GB)
Error: ImageSingletons do not contain key com.oracle.svm.hosted.LinkAtBuildTimeSupport
Error: Use -H:+ReportExceptionStackTraces to print stacktrace of underlying exception
------------------------------------------------------------------------------------------------------------------------
0,1s (6,4% of total time) in 8 GCs | Peak RSS: 0,59GB | CPU load: 5,14
========================================================================================================================
Failed generating 'commandlinerunner-log4j2' after 1,4s.
```
Looks like https://github.com/spring-projects-experimental/spring-native/issues/1546
|
comp
|
commandlinerunner fails with graalvm openjdk version openjdk runtime environment graalvm ce dev build jvmci openjdk bit server vm graalvm ce dev build jvmci mixed mode sharing graalvm native image generating commandlinerunner executable initializing error imagesingletons do not contain key com oracle svm hosted linkatbuildtimesupport error use h reportexceptionstacktraces to print stacktrace of underlying exception of total time in gcs peak rss cpu load failed generating commandlinerunner after looks like
| 1
|
226,210
| 24,946,724,346
|
IssuesEvent
|
2022-11-01 01:21:26
|
Nivaskumark/G3_kernel_4.19.72
|
https://api.github.com/repos/Nivaskumark/G3_kernel_4.19.72
|
closed
|
CVE-2021-45480 (Medium) detected in linuxlinux-4.19.257 - autoclosed
|
security vulnerability
|
## CVE-2021-45480 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.257</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/G3_kernel_4.19.72/commit/ffd4e521bae27768fd4a5f0b6f78bc7799e0feec">ffd4e521bae27768fd4a5f0b6f78bc7799e0feec</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/rds/connection.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/rds/connection.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel before 5.15.11. There is a memory leak in the __rds_conn_create() function in net/rds/connection.c in a certain combination of circumstances.
<p>Publish Date: 2021-12-24
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-45480>CVE-2021-45480</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-45480">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-45480</a></p>
<p>Release Date: 2021-12-24</p>
<p>Fix Resolution: v5.15.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-45480 (Medium) detected in linuxlinux-4.19.257 - autoclosed - ## CVE-2021-45480 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.257</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/G3_kernel_4.19.72/commit/ffd4e521bae27768fd4a5f0b6f78bc7799e0feec">ffd4e521bae27768fd4a5f0b6f78bc7799e0feec</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/rds/connection.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/rds/connection.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel before 5.15.11. There is a memory leak in the __rds_conn_create() function in net/rds/connection.c in a certain combination of circumstances.
<p>Publish Date: 2021-12-24
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-45480>CVE-2021-45480</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-45480">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-45480</a></p>
<p>Release Date: 2021-12-24</p>
<p>Fix Resolution: v5.15.11</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_comp
|
cve medium detected in linuxlinux autoclosed cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files net rds connection c net rds connection c vulnerability details an issue was discovered in the linux kernel before there is a memory leak in the rds conn create function in net rds connection c in a certain combination of circumstances publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
16,078
| 21,443,649,672
|
IssuesEvent
|
2022-04-25 02:16:51
|
nervosnetwork/godwoken-web3
|
https://api.github.com/repos/nervosnetwork/godwoken-web3
|
opened
|
`instant finality` transactions not in filter changes
|
Not compatible
|
When I send a transaction to web3, transaction's receipt will available soon because of instant finality.
But when I `eth_getFilterChanges` now, this transaction will not included, it's not submitted to a block.
|
True
|
`instant finality` transactions not in filter changes - When I send a transaction to web3, transaction's receipt will available soon because of instant finality.
But when I `eth_getFilterChanges` now, this transaction will not included, it's not submitted to a block.
|
comp
|
instant finality transactions not in filter changes when i send a transaction to transaction s receipt will available soon because of instant finality but when i eth getfilterchanges now this transaction will not included it s not submitted to a block
| 1
|
18,951
| 26,345,150,638
|
IssuesEvent
|
2023-01-10 21:18:20
|
OpenMDAO/OpenMDAO
|
https://api.github.com/repos/OpenMDAO/OpenMDAO
|
opened
|
Deprecation removal: `assert_rel_error` → `assert_near_equal`
|
backwards_incompatible
|
### Desired capability or behavior.
Removed the deprecated `assert_rel_error` in favor of `assert_near_equal` as of 3.25.0.
### Associated POEM
_No response_
|
True
|
Deprecation removal: `assert_rel_error` → `assert_near_equal` - ### Desired capability or behavior.
Removed the deprecated `assert_rel_error` in favor of `assert_near_equal` as of 3.25.0.
### Associated POEM
_No response_
|
comp
|
deprecation removal assert rel error → assert near equal desired capability or behavior removed the deprecated assert rel error in favor of assert near equal as of associated poem no response
| 1
|
19,001
| 26,425,352,131
|
IssuesEvent
|
2023-01-14 04:38:47
|
zer0Kerbal/KeridianDynamics
|
https://api.github.com/repos/zer0Kerbal/KeridianDynamics
|
closed
|
[compatibility] TETRIX tech tree
|
issue: compatibility/patch contributions-welcome
|
[compatibility] TETRIX tech tree
In the TETRIX TechTree the
* [ ] OASIS would be a Tier 7 part
* [ ] , same as the MobileVAB
* [ ] CAMPER would be a Tier 5 part
* [ ] The Hitchhiker is a Tier 3 part.
|
True
|
[compatibility] TETRIX tech tree - [compatibility] TETRIX tech tree
In the TETRIX TechTree the
* [ ] OASIS would be a Tier 7 part
* [ ] , same as the MobileVAB
* [ ] CAMPER would be a Tier 5 part
* [ ] The Hitchhiker is a Tier 3 part.
|
comp
|
tetrix tech tree tetrix tech tree in the tetrix techtree the oasis would be a tier part same as the mobilevab camper would be a tier part the hitchhiker is a tier part
| 1
|
207,855
| 15,839,051,132
|
IssuesEvent
|
2021-04-06 23:54:09
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: activerecord failed
|
C-test-failure O-roachtest O-robot branch-63154 release-blocker
|
[(roachtest).activerecord failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2853861&tab=buildLog) on [63154@15caa1367f0ecbff409d52bf4156a5086a2a3fbd](https://github.com/cockroachdb/cockroach/commits/15caa1367f0ecbff409d52bf4156a5086a2a3fbd):
```
The test failed on branch=63154, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/activerecord/run_1
orm_helpers.go:228,orm_helpers.go:154,activerecord.go:227,activerecord.go:239,test_runner.go:768:
Tests run on Cockroach v21.1.0-alpha.3-2101-g15caa1367f
Tests run against activerecord 6.1
6587 Total Tests Run
6585 tests passed
2 tests failed
17 tests skipped
0 tests ignored
0 tests passed unexpectedly
1 test failed unexpectedly
0 tests expected failed but skipped
0 tests expected failed but not run
---
--- PASS: SerializedAttributeTest#test_unexpected_serialized_type (expected)
--- FAIL: InsertAllTest#test_upsert_all_works_with_partitioned_indexes (unexpected)
For a full summary look at the activerecord artifacts
An updated blocklist (activeRecordBlockList21_1) is available in the artifacts' activerecord log
```
<details><summary>More</summary><p>
Artifacts: [/activerecord](https://teamcity.cockroachdb.com/viewLog.html?buildId=2853861&tab=artifacts#/activerecord)
Related:
- #61970 roachtest: activerecord failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.2)
- #61935 roachtest: activerecord failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-21.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-21.1)
- #61931 roachtest: activerecord failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master)
- #53730 roachtest: activerecord failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.1)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Aactiverecord.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
2.0
|
roachtest: activerecord failed - [(roachtest).activerecord failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2853861&tab=buildLog) on [63154@15caa1367f0ecbff409d52bf4156a5086a2a3fbd](https://github.com/cockroachdb/cockroach/commits/15caa1367f0ecbff409d52bf4156a5086a2a3fbd):
```
The test failed on branch=63154, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/activerecord/run_1
orm_helpers.go:228,orm_helpers.go:154,activerecord.go:227,activerecord.go:239,test_runner.go:768:
Tests run on Cockroach v21.1.0-alpha.3-2101-g15caa1367f
Tests run against activerecord 6.1
6587 Total Tests Run
6585 tests passed
2 tests failed
17 tests skipped
0 tests ignored
0 tests passed unexpectedly
1 test failed unexpectedly
0 tests expected failed but skipped
0 tests expected failed but not run
---
--- PASS: SerializedAttributeTest#test_unexpected_serialized_type (expected)
--- FAIL: InsertAllTest#test_upsert_all_works_with_partitioned_indexes (unexpected)
For a full summary look at the activerecord artifacts
An updated blocklist (activeRecordBlockList21_1) is available in the artifacts' activerecord log
```
<details><summary>More</summary><p>
Artifacts: [/activerecord](https://teamcity.cockroachdb.com/viewLog.html?buildId=2853861&tab=artifacts#/activerecord)
Related:
- #61970 roachtest: activerecord failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.2](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.2)
- #61935 roachtest: activerecord failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-21.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-21.1)
- #61931 roachtest: activerecord failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master)
- #53730 roachtest: activerecord failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-release-20.1](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-release-20.1)
[See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Aactiverecord.%2A&sort=title&restgroup=false&display=lastcommented+project)
<sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
|
non_comp
|
roachtest activerecord failed on the test failed on branch cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts activerecord run orm helpers go orm helpers go activerecord go activerecord go test runner go tests run on cockroach alpha tests run against activerecord total tests run tests passed tests failed tests skipped tests ignored tests passed unexpectedly test failed unexpectedly tests expected failed but skipped tests expected failed but not run pass serializedattributetest test unexpected serialized type expected fail insertalltest test upsert all works with partitioned indexes unexpected for a full summary look at the activerecord artifacts an updated blocklist is available in the artifacts activerecord log more artifacts related roachtest activerecord failed roachtest activerecord failed roachtest activerecord failed roachtest activerecord failed powered by
| 0
|
2,093
| 4,820,325,697
|
IssuesEvent
|
2016-11-04 22:22:52
|
Polymer/polymer
|
https://api.github.com/repos/Polymer/polymer
|
closed
|
Touch devices: No mouse events on elements for 2.5s when overlaid by elements using gestures
|
1.x 1.x-2.x compatibility bug p1
|
### Description
On touch devices, when a `<button>` is covered by a Polymer element that uses gestures (e.g. one with a `down` listener) and the Polymer element is moved, the `<button>` does not receive mouse events for 2.5s since `POINTERSTATE.mouse.target` is not cleared.
#### Live Demo
http://jsbin.com/runayep/edit?html,output
#### Steps to Reproduce
1. On a touch device (or mobile emulation in Chrome), tap the button to open the drawer
2. Tap the button to close the drawer
3. Immediately, tap the button to open the drawer again
#### Expected Results
The drawer opens
#### Actual Results
The drawer does not open, but clicking the button after 2.5s does.
### Browsers Affected
All
### Versions
- Polymer: v1.7.0
|
True
|
Touch devices: No mouse events on elements for 2.5s when overlaid by elements using gestures - ### Description
On touch devices, when a `<button>` is covered by a Polymer element that uses gestures (e.g. one with a `down` listener) and the Polymer element is moved, the `<button>` does not receive mouse events for 2.5s since `POINTERSTATE.mouse.target` is not cleared.
#### Live Demo
http://jsbin.com/runayep/edit?html,output
#### Steps to Reproduce
1. On a touch device (or mobile emulation in Chrome), tap the button to open the drawer
2. Tap the button to close the drawer
3. Immediately, tap the button to open the drawer again
#### Expected Results
The drawer opens
#### Actual Results
The drawer does not open, but clicking the button after 2.5s does.
### Browsers Affected
All
### Versions
- Polymer: v1.7.0
|
comp
|
touch devices no mouse events on elements for when overlaid by elements using gestures description on touch devices when a is covered by a polymer element that uses gestures e g one with a down listener and the polymer element is moved the does not receive mouse events for since pointerstate mouse target is not cleared live demo steps to reproduce on a touch device or mobile emulation in chrome tap the button to open the drawer tap the button to close the drawer immediately tap the button to open the drawer again expected results the drawer opens actual results the drawer does not open but clicking the button after does browsers affected all versions polymer
| 1
|
177,887
| 13,751,825,613
|
IssuesEvent
|
2020-10-06 13:49:08
|
ayumi-cloud/oc2-security-module
|
https://api.github.com/repos/ayumi-cloud/oc2-security-module
|
closed
|
Add negative labels to bad search engines with dns confirmation
|
Add to Blacklist Enhancement FINSIHED Firewall Priority: Medium Testing - Passed
|
### Enhancement idea
- [x] Add negative labels to bad search engines with dns confirmation.
|
1.0
|
Add negative labels to bad search engines with dns confirmation - ### Enhancement idea
- [x] Add negative labels to bad search engines with dns confirmation.
|
non_comp
|
add negative labels to bad search engines with dns confirmation enhancement idea add negative labels to bad search engines with dns confirmation
| 0
|
20,137
| 28,136,248,944
|
IssuesEvent
|
2023-04-01 12:18:01
|
MarketSquare/robotframework-robocop
|
https://api.github.com/repos/MarketSquare/robotframework-robocop
|
closed
|
Add rules for BREAK, CONTINUE
|
rule RF compatibility
|
Handle BREAK and CONTINUE. Possible rules (update old or create new):
1. Do not name keyword after reserved words (#570)
2. Do not use BREAK, CONTINUE outside FOR, WHILE or TRY EXCEPT (check how it works - possibly prohibit in finally) (#562)
3. Recommend to use them inside IF statement (otherwise the code after them will be dead)
4. Label `Exit For Loop`, `Exit For Loop If`, `Continue For Loop`, `Continue For Loop If` as deprecated and suggest using BREAK or CONTINUE. (#576)
|
True
|
Add rules for BREAK, CONTINUE - Handle BREAK and CONTINUE. Possible rules (update old or create new):
1. Do not name keyword after reserved words (#570)
2. Do not use BREAK, CONTINUE outside FOR, WHILE or TRY EXCEPT (check how it works - possibly prohibit in finally) (#562)
3. Recommend to use them inside IF statement (otherwise the code after them will be dead)
4. Label `Exit For Loop`, `Exit For Loop If`, `Continue For Loop`, `Continue For Loop If` as deprecated and suggest using BREAK or CONTINUE. (#576)
|
comp
|
add rules for break continue handle break and continue possible rules update old or create new do not name keyword after reserved words do not use break continue outside for while or try except check how it works possibly prohibit in finally recommend to use them inside if statement otherwise the code after them will be dead label exit for loop exit for loop if continue for loop continue for loop if as deprecated and suggest using break or continue
| 1
|
6,486
| 8,774,214,730
|
IssuesEvent
|
2018-12-18 19:08:25
|
cleverness/widget-css-classes
|
https://api.github.com/repos/cleverness/widget-css-classes
|
closed
|
Not working in Storefront Mega Menu widgets
|
compatibility
|
The plugin is not working when I add widgets in Storefront Mega Menu but works with the widgets in the sidebar.
|
True
|
Not working in Storefront Mega Menu widgets - The plugin is not working when I add widgets in Storefront Mega Menu but works with the widgets in the sidebar.
|
comp
|
not working in storefront mega menu widgets the plugin is not working when i add widgets in storefront mega menu but works with the widgets in the sidebar
| 1
|
340
| 2,774,453,750
|
IssuesEvent
|
2015-05-04 09:02:55
|
mapbox/polyclip
|
https://api.github.com/repos/mapbox/polyclip
|
opened
|
Handle more edge overlapping cases
|
compatibility
|
Currently we only handle the first 6 cases, 3 left to implement:
```
---x---
---x---
------
---x-----
------
---x----x---
---x---
----
------
------
------
-----x---
---x--x---
----
---x---
----
---x---
---x---
```
|
True
|
Handle more edge overlapping cases - Currently we only handle the first 6 cases, 3 left to implement:
```
---x---
---x---
------
---x-----
------
---x----x---
---x---
----
------
------
------
-----x---
---x--x---
----
---x---
----
---x---
---x---
```
|
comp
|
handle more edge overlapping cases currently we only handle the first cases left to implement x x x x x x x x x x x x
| 1
|
4,623
| 5,217,501,676
|
IssuesEvent
|
2017-01-26 14:07:10
|
w3c/sensors
|
https://api.github.com/repos/w3c/sensors
|
closed
|
Frequency readout allows fingerprinting sensors, privacy.
|
ED-ready privacy resolved security
|
Hi,
I will put it here just not to forget, we should consider writing something specifying that it will be possible to poll the supported frequencies (hardware/implementation related).
1. Loop through the polled frequencies (e.g. 0 .. 500)
2. Inspect the sensor readout, take into account the readout's timestamp
3. Verify how many readouts per second are supported
4. In the end, it could be possible to inspect the actually supported frequencies.
We could add something:
"Frequency polling in periodic reporting mode might allow the fingerprinting of hardware or implementation types, by probing which actual frequencies are supported by the platform."
|
True
|
Frequency readout allows fingerprinting sensors, privacy. - Hi,
I will put it here just not to forget, we should consider writing something specifying that it will be possible to poll the supported frequencies (hardware/implementation related).
1. Loop through the polled frequencies (e.g. 0 .. 500)
2. Inspect the sensor readout, take into account the readout's timestamp
3. Verify how many readouts per second are supported
4. In the end, it could be possible to inspect the actually supported frequencies.
We could add something:
"Frequency polling in periodic reporting mode might allow the fingerprinting of hardware or implementation types, by probing which actual frequencies are supported by the platform."
|
non_comp
|
frequency readout allows fingerprinting sensors privacy hi i will put it here just not to forget we should consider writing something specifying that it will be possible to poll the supported frequencies hardware implementation related loop through the polled frequencies e g inspect the sensor readout take into account the readout s timestamp verify how many readouts per second are supported in the end it could be possible to inspect the actually supported frequencies we could add something frequency polling in periodic reporting mode might allow the fingerprinting of hardware or implementation types by probing which actual frequencies are supported by the platform
| 0
|
308,197
| 23,237,421,121
|
IssuesEvent
|
2022-08-03 13:01:03
|
contentauth/c2pa-js
|
https://api.github.com/repos/contentauth/c2pa-js
|
closed
|
Update pull request template
|
documentation
|
The default pull request template is quite long and complex. We might want to use something closer to the one that c2pa-rs uses:
```markdown
## Changes in This Pull Request
_Give a narrative description of what has been changed._
## Checklist
- [ ] This PR represents a single feature, fix, or change.
- [ ] All applicable changes have been documented.
- [ ] Any `TO DO` items (or similar) have been entered as GitHub issues and the link to that issue has been included in a comment.
```
|
1.0
|
Update pull request template - The default pull request template is quite long and complex. We might want to use something closer to the one that c2pa-rs uses:
```markdown
## Changes in This Pull Request
_Give a narrative description of what has been changed._
## Checklist
- [ ] This PR represents a single feature, fix, or change.
- [ ] All applicable changes have been documented.
- [ ] Any `TO DO` items (or similar) have been entered as GitHub issues and the link to that issue has been included in a comment.
```
|
non_comp
|
update pull request template the default pull request template is quite long and complex we might want to use something closer to the one that rs uses markdown changes in this pull request give a narrative description of what has been changed checklist this pr represents a single feature fix or change all applicable changes have been documented any to do items or similar have been entered as github issues and the link to that issue has been included in a comment
| 0
|
10,958
| 12,974,050,316
|
IssuesEvent
|
2020-07-21 14:53:37
|
grondag/canvas
|
https://api.github.com/repos/grondag/canvas
|
closed
|
No Overlays from other mods
|
compatibility
|
Canvas does not allow other mods to render overlays (LightOverlay, Bounding Box Outline Reloaded, MiniHUD's shapes).
[latest.log](https://github.com/grondag/canvas/files/4926288/latest.log)
|
True
|
No Overlays from other mods - Canvas does not allow other mods to render overlays (LightOverlay, Bounding Box Outline Reloaded, MiniHUD's shapes).
[latest.log](https://github.com/grondag/canvas/files/4926288/latest.log)
|
comp
|
no overlays from other mods canvas does not allow other mods to render overlays lightoverlay bounding box outline reloaded minihud s shapes
| 1
|
14,093
| 10,602,805,250
|
IssuesEvent
|
2019-10-10 14:50:55
|
dotnet/wpf
|
https://api.github.com/repos/dotnet/wpf
|
opened
|
Enabling Roslyn analyzers
|
area-infrastructure up-for-grabs
|
As a team, we'd like to get a core set of Roslyn code analyzers fully integrated into our build to help ensure the long term health of the code base. Much of WPF was written 10+ years ago and doesn't necessarily follow the best practices. It's important to understand what can and can't change, **no change as part of this work should change the behavior**.
### Scope of work
- [ ] Determine which rules should and shouldn't be enabled.
- [ ] If a rule should be suppressed entirely, add the suppression to the code analysis ruleset in eng\WpfArcadeSdk\tools\CodeAnalysis\WpfCodeAnalysis.ruleset
- [ ] If a rule needs to be suppressed, but will never be fixed (i.e. for compat reasons) put the suppression in a GlobalSuppressions.cs file.
- [ ] If a rule needs to be suppressed, but should be addressed a later time, put the suppression at the callsite.
- [ ] If a rule is to be fixed, use the associated code-fixer to make the changes, instead of by hand.
### Projects to enable
Here are the assemblies that we should focus on. Ideally done one at a time so that PR's are easy to read and understand. They can follow what we did with `System.Xaml` for an example. It's recommended to start with one of the smaller projects first :)
- [ ] System.Windows.Controls.Ribbon
- [ ] ReachFramework
- [ ] UIAutomationClient
- [ ] UIAutomationProvider
- [ ] UIAutomationTypes
- [ ] PresentationBuildTasks
- [ ] PresentationCore
- [ ] PresentationFramework
- [ ] WindowsBase
**Note: this work isn't necessary for test projects**
### Methodology
- To make this work easier to understand for the maintainers, it should be easy to differentiate at the commit/pr level what was a suppression versus what was a fix. This is not a hard requirement, but it will make the life of anyone reviewing the code much easier.
- Add `<EnableAnalyzers>true</EnableAnalyzers>` to the .csproj file and build. There will most likely be a lot of errors, this is where the fun begins :)
- Part of determining the proper rules is based on the current state of the code, and not necessarily what is ideal. Too many changes to the code will result in PR's that are too lengthy and cumbersome to review, which isn't ideal for anyone. Separate issues can be filed if there are certain rules we want to enable in the future.
### Code-fixers
We should run the code-fixers for each analyzer with the enabled rules and auto-update the source code, rather than making the edits by hand. Some of the fixers may not be implemented, and it would be extra valuable to go and add the fixer so that everyone in the .NET community can benefit from it. I did this for one of the fixers early on, and here is an example PR: https://github.com/dotnet/roslyn-analyzers/pull/2166
All the analyzers and associated code-fixers are in this repo:
https://github.com/dotnet/roslyn-analyzers
Running the code-fixers can be done when the solution is loaded in Visual Studio, or by using this tool (I don't know what state this tool is in or if it works, there may be bugs)
https://github.com/stevenbrix/AnalyzerRunner
This is work that we've had on our backlog for a while, but isn't the highest priority at the moment for us. There have been a lot of code cleanup/beautification PR's to the WPF code base since it's been open-sourced (thank you everyone!), and so I'm hoping to formalize this work a bit and open it up to anyone that is interested in defining the best-practices in this code base, as well as learn a bit about Roslyn analyzers while doing it!
/cc @YoshihiroIto who expressed interest in learning more about Roslyn analyzers.
|
1.0
|
Enabling Roslyn analyzers - As a team, we'd like to get a core set of Roslyn code analyzers fully integrated into our build to help ensure the long term health of the code base. Much of WPF was written 10+ years ago and doesn't necessarily follow the best practices. It's important to understand what can and can't change, **no change as part of this work should change the behavior**.
### Scope of work
- [ ] Determine which rules should and shouldn't be enabled.
- [ ] If a rule should be suppressed entirely, add the suppression to the code analysis ruleset in eng\WpfArcadeSdk\tools\CodeAnalysis\WpfCodeAnalysis.ruleset
- [ ] If a rule needs to be suppressed, but will never be fixed (i.e. for compat reasons) put the suppression in a GlobalSuppressions.cs file.
- [ ] If a rule needs to be suppressed, but should be addressed a later time, put the suppression at the callsite.
- [ ] If a rule is to be fixed, use the associated code-fixer to make the changes, instead of by hand.
### Projects to enable
Here are the assemblies that we should focus on. Ideally done one at a time so that PR's are easy to read and understand. They can follow what we did with `System.Xaml` for an example. It's recommended to start with one of the smaller projects first :)
- [ ] System.Windows.Controls.Ribbon
- [ ] ReachFramework
- [ ] UIAutomationClient
- [ ] UIAutomationProvider
- [ ] UIAutomationTypes
- [ ] PresentationBuildTasks
- [ ] PresentationCore
- [ ] PresentationFramework
- [ ] WindowsBase
**Note: this work isn't necessary for test projects**
### Methodology
- To make this work easier to understand for the maintainers, it should be easy to differentiate at the commit/pr level what was a suppression versus what was a fix. This is not a hard requirement, but it will make the life of anyone reviewing the code much easier.
- Add `<EnableAnalyzers>true</EnableAnalyzers>` to the .csproj file and build. There will most likely be a lot of errors, this is where the fun begins :)
- Part of determining the proper rules is based on the current state of the code, and not necessarily what is ideal. Too many changes to the code will result in PR's that are too lengthy and cumbersome to review, which isn't ideal for anyone. Separate issues can be filed if there are certain rules we want to enable in the future.
### Code-fixers
We should run the code-fixers for each analyzer with the enabled rules and auto-update the source code, rather than making the edits by hand. Some of the fixers may not be implemented, and it would be extra valuable to go and add the fixer so that everyone in the .NET community can benefit from it. I did this for one of the fixers early on, and here is an example PR: https://github.com/dotnet/roslyn-analyzers/pull/2166
All the analyzers and associated code-fixers are in this repo:
https://github.com/dotnet/roslyn-analyzers
Running the code-fixers can be done when the solution is loaded in Visual Studio, or by using this tool (I don't know what state this tool is in or if it works, there may be bugs)
https://github.com/stevenbrix/AnalyzerRunner
This is work that we've had on our backlog for a while, but isn't the highest priority at the moment for us. There have been a lot of code cleanup/beautification PR's to the WPF code base since it's been open-sourced (thank you everyone!), and so I'm hoping to formalize this work a bit and open it up to anyone that is interested in defining the best-practices in this code base, as well as learn a bit about Roslyn analyzers while doing it!
/cc @YoshihiroIto who expressed interest in learning more about Roslyn analyzers.
|
non_comp
|
enabling roslyn analyzers as a team we d like to get a core set of roslyn code analyzers fully integrated into our build to help ensure the long term health of the code base much of wpf was written years ago and doesn t necessarily follow the best practices it s important to understand what can and can t change no change as part of this work should change the behavior scope of work determine which rules should and shouldn t be enabled if a rule should be suppressed entirely add the suppression to the code analysis ruleset in eng wpfarcadesdk tools codeanalysis wpfcodeanalysis ruleset if a rule needs to be suppressed but will never be fixed i e for compat reasons put the suppression in a globalsuppressions cs file if a rule needs to be suppressed but should be addressed a later time put the suppression at the callsite if a rule is to be fixed use the associated code fixer to make the changes instead of by hand projects to enable here are the assemblies that we should focus on ideally done one at a time so that pr s are easy to read and understand they can follow what we did with system xaml for an example it s recommended to start with one of the smaller projects first system windows controls ribbon reachframework uiautomationclient uiautomationprovider uiautomationtypes presentationbuildtasks presentationcore presentationframework windowsbase note this work isn t necessary for test projects methodology to make this work easier to understand for the maintainers it should be easy to differentiate at the commit pr level what was a suppression versus what was a fix this is not a hard requirement but it will make the life of anyone reviewing the code much easier add true to the csproj file and build there will most likely be a lot of errors this is where the fun begins part of determining the proper rules is based on the current state of the code and not necessarily what is ideal too many changes to the code will result in pr s that are too lengthy and cumbersome to review which isn t ideal for anyone separate issues can be filed if there are certain rules we want to enable in the future code fixers we should run the code fixers for each analyzer with the enabled rules and auto update the source code rather than making the edits by hand some of the fixers may not be implemented and it would be extra valuable to go and add the fixer so that everyone in the net community can benefit from it i did this for one of the fixers early on and here is an example pr all the analyzers and associated code fixers are in this repo running the code fixers can be done when the solution is loaded in visual studio or by using this tool i don t know what state this tool is in or if it works there may be bugs this is work that we ve had on our backlog for a while but isn t the highest priority at the moment for us there have been a lot of code cleanup beautification pr s to the wpf code base since it s been open sourced thank you everyone and so i m hoping to formalize this work a bit and open it up to anyone that is interested in defining the best practices in this code base as well as learn a bit about roslyn analyzers while doing it cc yoshihiroito who expressed interest in learning more about roslyn analyzers
| 0
|
214,375
| 16,583,539,370
|
IssuesEvent
|
2021-05-31 15:01:55
|
ManimCommunity/ManimPango
|
https://api.github.com/repos/ManimCommunity/ManimPango
|
closed
|
Don't depend on Manim for testing
|
tests
|
Currently, for testing, we require ManimCE to be installed, but that causes a circular dependency even though it is just for testing. My suggestion is not to depend on Manim, instead implement those classes in the test suite.
|
1.0
|
Don't depend on Manim for testing - Currently, for testing, we require ManimCE to be installed, but that causes a circular dependency even though it is just for testing. My suggestion is not to depend on Manim, instead implement those classes in the test suite.
|
non_comp
|
don t depend on manim for testing currently for testing we require manimce to be installed but that causes a circular dependency even though it is just for testing my suggestion is not to depend on manim instead implement those classes in the test suite
| 0
|
95,703
| 19,750,636,807
|
IssuesEvent
|
2022-01-15 03:16:04
|
decentralized-identity/sidetree
|
https://api.github.com/repos/decentralized-identity/sidetree
|
closed
|
Make applyFirstValidOperation in Resolver return previous did state instead of undefined
|
code refactoring
|
- Make sure no side effects if this change is taken.
- Make applyFirstValidOperation in Resolver return previous did state instead of undefined when no operation can be applied.
|
1.0
|
Make applyFirstValidOperation in Resolver return previous did state instead of undefined - - Make sure no side effects if this change is taken.
- Make applyFirstValidOperation in Resolver return previous did state instead of undefined when no operation can be applied.
|
non_comp
|
make applyfirstvalidoperation in resolver return previous did state instead of undefined make sure no side effects if this change is taken make applyfirstvalidoperation in resolver return previous did state instead of undefined when no operation can be applied
| 0
|
8,497
| 10,518,204,168
|
IssuesEvent
|
2019-09-29 09:07:35
|
Johni0702/BetterPortals
|
https://api.github.com/repos/Johni0702/BetterPortals
|
closed
|
Mobs are rendered but not properly processed through a portal
|
bug compatibility
|
I tried building a farming setup for pigmen where they would fall through a nether portal, to die to fall damage on the other side.
What I noticed was that mobs properly fall through the portal, up to a certain distance at which point they froze in midair. If I made them hit a floor before that, they could die normally, but their corpses froze aswell.
Mobs seem to have no AI ‘on the other side’, but I guess they are somewhat being processed still. I can drop an item through and see it fall.
If I pass through the portal myself, all the frozen corpses properly disintegrate into smoke and drops, aswell as mobs moving.
Now this all might be an intentional, but maybe undesired limitation to loading the other dimension remotely.
(Same results for with or without using a cubic world)
|
True
|
Mobs are rendered but not properly processed through a portal - I tried building a farming setup for pigmen where they would fall through a nether portal, to die to fall damage on the other side.
What I noticed was that mobs properly fall through the portal, up to a certain distance at which point they froze in midair. If I made them hit a floor before that, they could die normally, but their corpses froze aswell.
Mobs seem to have no AI ‘on the other side’, but I guess they are somewhat being processed still. I can drop an item through and see it fall.
If I pass through the portal myself, all the frozen corpses properly disintegrate into smoke and drops, aswell as mobs moving.
Now this all might be an intentional, but maybe undesired limitation to loading the other dimension remotely.
(Same results for with or without using a cubic world)
|
comp
|
mobs are rendered but not properly processed through a portal i tried building a farming setup for pigmen where they would fall through a nether portal to die to fall damage on the other side what i noticed was that mobs properly fall through the portal up to a certain distance at which point they froze in midair if i made them hit a floor before that they could die normally but their corpses froze aswell mobs seem to have no ai ‘on the other side’ but i guess they are somewhat being processed still i can drop an item through and see it fall if i pass through the portal myself all the frozen corpses properly disintegrate into smoke and drops aswell as mobs moving now this all might be an intentional but maybe undesired limitation to loading the other dimension remotely same results for with or without using a cubic world
| 1
|
101,415
| 16,509,924,984
|
IssuesEvent
|
2021-05-26 01:51:34
|
kijunb33/c
|
https://api.github.com/repos/kijunb33/c
|
opened
|
CVE-2019-17563 (High) detected in tomcat-embed-core-7.0.90.jar
|
security vulnerability
|
## CVE-2019-17563 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-7.0.90.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to vulnerable library: c/tomcat-embed-core-7.0.90.jar</p>
<p>
Dependency Hierarchy:
- :x: **tomcat-embed-core-7.0.90.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/kijunb33/c/commits/7dc3b09e73910975c94cebd1e54f33c4097c695b">7dc3b09e73910975c94cebd1e54f33c4097c695b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When using FORM authentication with Apache Tomcat 9.0.0.M1 to 9.0.29, 8.5.0 to 8.5.49 and 7.0.0 to 7.0.98 there was a narrow window where an attacker could perform a session fixation attack. The window was considered too narrow for an exploit to be practical but, erring on the side of caution, this issue has been treated as a security vulnerability.
<p>Publish Date: 2019-12-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17563>CVE-2019-17563</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17563">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17563</a></p>
<p>Release Date: 2019-12-23</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:7.0.99,8.5.50,9.0.30;org.apache.tomcat:tomcat-catalina:7.0.99,8.5.50,9.0.30</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-17563 (High) detected in tomcat-embed-core-7.0.90.jar - ## CVE-2019-17563 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-7.0.90.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to vulnerable library: c/tomcat-embed-core-7.0.90.jar</p>
<p>
Dependency Hierarchy:
- :x: **tomcat-embed-core-7.0.90.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/kijunb33/c/commits/7dc3b09e73910975c94cebd1e54f33c4097c695b">7dc3b09e73910975c94cebd1e54f33c4097c695b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When using FORM authentication with Apache Tomcat 9.0.0.M1 to 9.0.29, 8.5.0 to 8.5.49 and 7.0.0 to 7.0.98 there was a narrow window where an attacker could perform a session fixation attack. The window was considered too narrow for an exploit to be practical but, erring on the side of caution, this issue has been treated as a security vulnerability.
<p>Publish Date: 2019-12-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17563>CVE-2019-17563</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17563">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17563</a></p>
<p>Release Date: 2019-12-23</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:7.0.99,8.5.50,9.0.30;org.apache.tomcat:tomcat-catalina:7.0.99,8.5.50,9.0.30</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_comp
|
cve high detected in tomcat embed core jar cve high severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to vulnerable library c tomcat embed core jar dependency hierarchy x tomcat embed core jar vulnerable library found in head commit a href found in base branch main vulnerability details when using form authentication with apache tomcat to to and to there was a narrow window where an attacker could perform a session fixation attack the window was considered too narrow for an exploit to be practical but erring on the side of caution this issue has been treated as a security vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core org apache tomcat tomcat catalina step up your open source security game with whitesource
| 0
|
135,780
| 19,663,401,473
|
IssuesEvent
|
2022-01-10 19:30:17
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
[Design] My VA: Update copy on My VA to align with Content direction
|
design my-va-dashboard vsa-authenticated-exp needs-grooming
|
## Background
Per comment on [ticket #33839](https://github.com/department-of-veterans-affairs/va.gov-team/issues/33839), we should align better with the advisement from our Content partners.
- We need to update the header for "Claims & Appeals" to replace the ampersand
- We need to update the Health Care section for unread messages to reword and move the URL to different text
## Tasks
**Design changes**
- [ ] Update design documentation to reflect the changes needed
- Update header "Claims & appeals" to read "**Claims and appeals**"
- Update Health care section on the messages text
- Replace [You have n unread messages] with new verbiage "**You have n unread messages. [View your messages].**"
## Acceptance Criteria
- My VA page reflects the updates in accordance with Content advisement
|
1.0
|
[Design] My VA: Update copy on My VA to align with Content direction - ## Background
Per comment on [ticket #33839](https://github.com/department-of-veterans-affairs/va.gov-team/issues/33839), we should align better with the advisement from our Content partners.
- We need to update the header for "Claims & Appeals" to replace the ampersand
- We need to update the Health Care section for unread messages to reword and move the URL to different text
## Tasks
**Design changes**
- [ ] Update design documentation to reflect the changes needed
- Update header "Claims & appeals" to read "**Claims and appeals**"
- Update Health care section on the messages text
- Replace [You have n unread messages] with new verbiage "**You have n unread messages. [View your messages].**"
## Acceptance Criteria
- My VA page reflects the updates in accordance with Content advisement
|
non_comp
|
my va update copy on my va to align with content direction background per comment on we should align better with the advisement from our content partners we need to update the header for claims appeals to replace the ampersand we need to update the health care section for unread messages to reword and move the url to different text tasks design changes update design documentation to reflect the changes needed update header claims appeals to read claims and appeals update health care section on the messages text replace with new verbiage you have n unread messages acceptance criteria my va page reflects the updates in accordance with content advisement
| 0
|
16,681
| 22,959,742,163
|
IssuesEvent
|
2022-07-19 14:29:12
|
MetaMask/metamask-mobile
|
https://api.github.com/repos/MetaMask/metamask-mobile
|
closed
|
Window.ethereum not being injected on Android (again?)
|
bug dapp-compatibility severity2-normal community
|
I'm aware of this having been a previous issue, but it still seems to be happening with Android 11 (and I just updated to August 5 security update).
As an example of my implementation, you can check https://tossaneth.com/eth, where pressing the send button uses window.ethereum for the transaction if it exists, otherwise it serves as a deep link to the same url. It works fine on iOS.
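The fallback described here (use the injected provider when it exists, otherwise serve a deep link to the same URL) can be modeled in a short sketch. This is illustrative Python, not the dapp's actual JavaScript; the function name is invented, and the `metamask.app.link/dapp/` host is MetaMask's published deep-link format.

```python
# Hypothetical model of the dapp's send-button logic: prefer the injected
# window.ethereum provider; otherwise deep-link into the wallet app.

def choose_send_path(has_injected_provider: bool, page_url: str) -> str:
    """Return 'provider' when an injected provider exists, else a deep link."""
    if has_injected_provider:
        return "provider"
    # MetaMask mobile resolves https://metamask.app.link/dapp/<host+path>
    return "https://metamask.app.link/dapp/" + page_url.removeprefix("https://")
```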
|
True
|
Window.ethereum not being injected on Android (again?) - I'm aware of this having been a previous issue, but it still seems to be happening with Android 11 (and I just updated to August 5 security update).
As an example of my implementation, you can check https://tossaneth.com/eth, where pressing the send button uses window.ethereum for the transaction if it exists, otherwise it serves as a deep link to the same url. It works fine on iOS.
|
comp
|
window ethereum not being injected on android again i m aware of this having been a previous issue but it still seems to be happening with android and i just updated to august security update as an example of my implementation you can check where pressing the send button uses window ethereum for the transaction if it exists otherwise it serves as a deep link to the same url it works fine on ios
| 1
|
14,797
| 18,234,248,899
|
IssuesEvent
|
2021-10-01 03:41:18
|
gambitph/Stackable
|
https://api.github.com/repos/gambitph/Stackable
|
closed
|
[Toolset Integration] When Toolset is activated, the advancedSelectControl is huge
|
bug need more info [version] V3 v2 compatibility
|
<!--
Before posting, make sure that:
1. you are running the latest version of Stackable, and
2. you have searched whether your issue has already been reported
-->
**To Reproduce**
Steps to reproduce the behavior:
1. Activate Toolset Blocks / Types
2. Add a v2 / v3 Card
3. Go to Advanced tab
4. Collapse the General panel - see bug
**Screenshots**
https://user-images.githubusercontent.com/28699204/134893927-4323cafc-8da6-49f5-9f53-bf9aefa55acb.mov
|
True
|
[Toolset Integration] When Toolset is activated, the advancedSelectControl is huge - <!--
Before posting, make sure that:
1. you are running the latest version of Stackable, and
2. you have searched whether your issue has already been reported
-->
**To Reproduce**
Steps to reproduce the behavior:
1. Activate Toolset Blocks / Types
2. Add a v2 / v3 Card
3. Go to Advanced tab
4. Collapse the General panel - see bug
**Screenshots**
https://user-images.githubusercontent.com/28699204/134893927-4323cafc-8da6-49f5-9f53-bf9aefa55acb.mov
|
comp
|
when toolset is activated the advancedselectcontrol is huge before posting make sure that you are running the latest version of stackable and you have searched whether your issue has already been reported to reproduce steps to reproduce the behavior activate toolset blocks types add a card go to advanced tab collapse the general panel see bug screenshots
| 1
|
1,979
| 3,226,529,310
|
IssuesEvent
|
2015-10-10 10:50:04
|
adobe/brackets
|
https://api.github.com/repos/adobe/brackets
|
closed
|
RESEARCH: Measure performance against other editors
|
performance release-14 STORY
|
## Areas to measure
* Find in Files
* Switching between / opening files
* Typing (https://github.com/adobe/brackets/issues/11104)
* Scrolling (https://github.com/adobe/brackets/issues/11105)
* Startup time
These can be categorized as mostly rendering (Typing & Scrolling),
mostly JavaScript (Find in Files), and a mix of the two (Switching between / opening files)
Out of scope:
* Ways to track key performance metrics regularly (per build or per sprint) and graph them -- see [story #357 (Performance Automation)](https://trello.com/c/RWwPlSeO/357-2-performance-automation)
* Ways we could automatically gather performance stats from Brackets in the field as metrics -- fold into Brackets Health Report story
## Which editors should we compare against?
- Sublime
- Atom
- Visual Studio Code
- (Out of scope: Visual Studio, XCode, Notepad++, etc.)
## Measurement approach
- High speed camera: Typing, Scrolling, Switching between files
- Stopwatch: Find in Files (other editors)
- PerfUtils instrumentation: Find in Files (Brackets)
## Project Size
We want to test these features across small, medium and large projects. A large project is defined as 30,000+ files.
What do we consider a small, medium and large Project? Large file? Should we test typing with code hints (and if so which kind), or without?
Do we have example projects that we could leverage for this data collection? Using real world projects would make it definitely more representative.
## Environment
Must use the same machine for all tests to ensure comparable results. Run all tests twice, on:
- Windows 7
- OSX 10.10.3 Retina
- (Out of scope: Windows 8, Mac OSX 10.10 non-Retina, etc.)
## Tasks
- [ ] Define test file(s) for Scrolling/Typing testing
- [ ] Define test project for Find in Files testing
- [ ] Define specific test steps for Typing
- [ ] Define specific test steps for Scrolling
- [ ] Define specific test steps for Switching editors
- [ ] Define specific test steps for Find in Files
- [ ] Decide test hardware for both OSes
- [ ] Install all editors on the test machines
- [ ] Capture high speed camera data (Typing, Scrolling, Switching)
- [ ] Analyze high-speed camera data (Typing, Scrolling, Switching)
- [ ] Find in Files measurements
- [ ] Write up summary report & suggested areas of focus
|
True
|
RESEARCH: Measure performance against other editors - ## Areas to measure
* Find in Files
* Switching between / opening files
* Typing (https://github.com/adobe/brackets/issues/11104)
* Scrolling (https://github.com/adobe/brackets/issues/11105)
* Startup time
These can be categorized as mostly rendering (Typing & Scrolling),
mostly JavaScript (Find in Files), and a mix of the two (Switching between / opening files)
Out of scope:
* Ways to track key performance metrics regularly (per build or per sprint) and graph them -- see [story #357 (Performance Automation)](https://trello.com/c/RWwPlSeO/357-2-performance-automation)
* Ways we could automatically gather performance stats from Brackets in the field as metrics -- fold into Brackets Health Report story
## Which editors should we compare against?
- Sublime
- Atom
- Visual Studio Code
- (Out of scope: Visual Studio, XCode, Notepad++, etc.)
## Measurement approach
- High speed camera: Typing, Scrolling, Switching between files
- Stopwatch: Find in Files (other editors)
- PerfUtils instrumentation: Find in Files (Brackets)
## Project Size
We want to test these features across small, medium and large projects. A large project is defined as 30,000+ files.
What do we consider a small, medium and large Project? Large file? Should we test typing with code hints (and if so which kind), or without?
Do we have example projects that we could leverage for this data collection? Using real world projects would make it definitely more representative.
## Environment
Must use the same machine for all tests to ensure comparable results. Run all tests twice, on:
- Windows 7
- OSX 10.10.3 Retina
- (Out of scope: Windows 8, Mac OSX 10.10 non-Retina, etc.)
## Tasks
- [ ] Define test file(s) for Scrolling/Typing testing
- [ ] Define test project for Find in Files testing
- [ ] Define specific test steps for Typing
- [ ] Define specific test steps for Scrolling
- [ ] Define specific test steps for Switching editors
- [ ] Define specific test steps for Find in Files
- [ ] Decide test hardware for both OSes
- [ ] Install all editors on the test machines
- [ ] Capture high speed camera data (Typing, Scrolling, Switching)
- [ ] Analyze high-speed camera data (Typing, Scrolling, Switching)
- [ ] Find in Files measurements
- [ ] Write up summary report & suggested areas of focus
|
non_comp
|
research measure performance against other editors areas to measure find in files switching between opening files typing scrolling startup time these can be categorized as mostly rendering typing scrolling mostly javascript find in files and a mix of the two switching between opening files out of scope ways to track key performance metrics regularly per build or per sprint and graph them see ways we could automatically gather performance stats from brackets in the field as metrics fold into brackets health report story which editors should we compare against sublime atom visual studio code out of scope visual studio xcode notepad etc measurement approach high speed camera typing scrolling switching between files stopwatch find in files other editors perfutils instrumentation find in files brackets project size we want to test these features across small medium and large projects large project is defined files what do we consider a small medium and large project large file should we test typing with code hints and if so which kind or without do we have example projects that we could leverage for this data collection using real world projects would make it definitely more representative environment must use the same machine for all tests to ensure comparable results run all tests twice on windows osx retina out of scope windows mac osx non retina etc tasks define test file s for scrolling typing testing define test project for find in files testing define specific test steps for typing define specific test steps for scrolling define specific test steps for switching editors define specific test steps for find in files decide test hardware for both oses install all editors on the test machines capture high speed camera data typing scrolling switching analyze high speed camera data typing scrolling switching find in files measurements write up summary report suggested areas of focus
| 0
|
16,286
| 21,923,041,486
|
IssuesEvent
|
2022-05-22 21:22:49
|
raiguard/krastorio-2
|
https://api.github.com/repos/raiguard/krastorio-2
|
closed
|
Compatability issue with angelspetrochem_0.9.21 Error Non-contiguous Technology levels. CTD on launch. Possible solution?
|
bug compatibility
|
### Description
Error ModManager.cpp:1578: Technology chlorine-processing: Non-contiguous levels: 2, followed by 4
Mod settings wiped, fresh install.
Likely caused by a possible oversight in compatability script:
compatibility-scripts/data-final-fixes/angels/angelspetrochem.lua line 57
Following the condition in line 55, technology "chlorine-processing-3" is removed (line 57), yet "chlorine-processing-4", presumably added by angelspetrochem, still exists unchanged, which causes an error.
Adding the following code fixed the crash, yet I am not sure if this fix is final.
`data.raw.technology["chlorine-processing-4"] = nil`
`krastorio.technologies.convertPrerequisiteFromAllTechnologies(
"chlorine-processing-4",
"chlorine-processing-2",
true
)`
[factorio-current.log](https://github.com/raiguard/krastorio-2/files/8653600/factorio-current.log)
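The fix above follows a general pattern: when one technology level is removed, any later level that depends on it must be re-pointed at a surviving level so no "non-contiguous" gap remains. A generic sketch of that pattern (Python, not the mod's Lua; all names invented):

```python
# Drop a technology and rewire any prerequisite on it to a survivor,
# mirroring what convertPrerequisiteFromAllTechnologies does in the fix.

def remove_level(prereqs: dict, removed: str, survivor: str) -> dict:
    """Return a new prereq table without `removed`, deps re-pointed."""
    out = {}
    for tech, deps in prereqs.items():
        if tech == removed:
            continue  # delete the offending level entirely
        out[tech] = [survivor if d == removed else d for d in deps]
    return out
```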
### Reproduction
1. Install:
angelsrefining 0.12.1
angelspetrochem 0.9.21
Krastorio2 1.2.25
2. Launch the game
### Factorio version
1.1.57
### In-game username
SilentGraph
|
True
|
Compatability issue with angelspetrochem_0.9.21 Error Non-contiguous Technology levels. CTD on launch. Possible solution? - ### Description
Error ModManager.cpp:1578: Technology chlorine-processing: Non-contiguous levels: 2, followed by 4
Mod settings wiped, fresh install.
Likely caused by a possible oversight in compatability script:
compatibility-scripts/data-final-fixes/angels/angelspetrochem.lua line 57
Following the condition in line 55, technology "chlorine-processing-3" is removed (line 57), yet "chlorine-processing-4", presumably added by angelspetrochem, still exists unchanged, which causes an error.
Adding the following code fixed the crash, yet I am not sure if this fix is final.
`data.raw.technology["chlorine-processing-4"] = nil`
`krastorio.technologies.convertPrerequisiteFromAllTechnologies(
"chlorine-processing-4",
"chlorine-processing-2",
true
)`
[factorio-current.log](https://github.com/raiguard/krastorio-2/files/8653600/factorio-current.log)
### Reproduction
1. Install:
angelsrefining 0.12.1
angelspetrochem 0.9.21
Krastorio2 1.2.25
2. Launch the game
### Factorio version
1.1.57
### In-game username
SilentGraph
|
comp
|
compatability issue with angelspetrochem error non contiguous technology levels ctd on launch possible solution description error modmanager cpp technology chlorine processing non contiguous levels followed by mod settings wiped fresh install likely caused by a possible oversight in compatability script compatibility scripts data final fixes angels angelspetrochem lua line following the condition in line technology chlorine processing is removed line yet chlorine processing presumably added by angelspetrochem still exists and unchanged which causes an error adding the following code fixed the crash yet i am not sure if this fix is final data raw technology nil krastorio technologies convertprerequisitefromalltechnologies chlorine processing chlorine processing true reproduction install angelsrefining angelspetrochem launch the game factorio version in game username silentgraph
| 1
|
92
| 2,551,382,605
|
IssuesEvent
|
2015-02-02 09:08:10
|
stapelberg/my-issue-test
|
https://api.github.com/repos/stapelberg/my-issue-test
|
closed
|
hotkey configuration doesn't respect alternating keyboard layouts
|
C: compatibility R: duplicate
|
**Reported by dothebart@… on 26 Aug 2009 08:43 UTC**
Being a user of the dvorak layout, http://dvorak.mwbrooks.com/; I found out the hard way that Mod1-E is exit, since my ';' is there and I wanted to test the tiling.
Check out with
setxkbmap dvorak
or change the layout in the xorg.conf
|
True
|
hotkey configuration doesn't respect alternating keyboard layouts - **Reported by dothebart@… on 26 Aug 2009 08:43 UTC**
Being a user of the dvorak layout, http://dvorak.mwbrooks.com/; I found out the hard way that Mod1-E is exit, since my ';' is there and I wanted to test the tiling.
Check out with
setxkbmap dvorak
or change the layout in the xorg.conf
|
comp
|
hotkey configuration doesn t respect alternating keyboard layouts reported by dothebart … on aug utc being a user of the dvorak layout i found out the hard way that e is exit since my is there and i wanted to test the tiling check out with setxkbmap dvorak or change the layout in the xorg conf
| 1
|
156,654
| 13,652,660,128
|
IssuesEvent
|
2020-09-27 08:43:27
|
AzureAD/microsoft-authentication-extensions-for-dotnet
|
https://api.github.com/repos/AzureAD/microsoft-authentication-extensions-for-dotnet
|
closed
|
link to linux fallback configuration example does not work
|
Fixed bug documentation
|
I'm trying to test whether token caching works for our desktop app on linux, but I can't do it over ssh. I noticed the wiki states
> KeyRings does not work in headless mode (e.g. when connected over SSH or when running Linux in a container) due to a dependency on X11. To overcome this, a fallback to a plaintext file can be configured. See this example for how to configure it.
But when I click the link to the example, I get a github 404 error. Can the wiki please be updated?
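The fallback the wiki describes (OS key ring first, plaintext file when the key ring is unusable in a headless session) follows a general pattern. A generic sketch of that pattern — illustrative Python, not the actual MSAL.NET configuration API; function names are invented:

```python
# Try the protected key-ring store first; when it fails (e.g. no X11/D-Bus
# session over SSH), fall back to an unprotected plaintext file.

def save_token(token: str, path: str, keyring_save=None) -> str:
    """Persist a token, preferring the key ring; return which store was used."""
    if keyring_save is not None:
        try:
            keyring_save(token)
            return "keyring"
        except OSError:
            pass  # key ring unavailable in this environment
    # Plaintext fallback -- callers should restrict file permissions.
    with open(path, "w") as fh:
        fh.write(token)
    return "file"
```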
|
1.0
|
link to linux fallback configuration example does not work - I'm trying to test whether token caching works for our desktop app on linux, but I can't do it over ssh. I noticed the wiki states
> KeyRings does not work in headless mode (e.g. when connected over SSH or when running Linux in a container) due to a dependency on X11. To overcome this, a fallback to a plaintext file can be configured. See this example for how to configure it.
But when I click the link to the example, I get a github 404 error. Can the wiki please be updated?
|
non_comp
|
link to linux fallback configuration example does not work i m trying to test whether token caching works for our desktop app on linux but i can t do it over ssh i noticed the wiki states keyrings does not work in headless mode e g when connected over ssh or when running linux in a container due to a dependency on to overcome this a fallback to a plaintext file can be configured see this example for how to configure it but when i click the link to the example i get a github error can the wiki please be updated
| 0
|
280,138
| 8,678,339,458
|
IssuesEvent
|
2018-11-30 19:36:06
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
Synthesis failed for kms
|
autosynth failure priority: p1 type: bug
|
Hello! Autosynth couldn't regenerate kms. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Switched to branch 'autosynth-kms'
synthtool > You are running the synthesis script directly, this will be disabled in a future release of Synthtool. Please use python3 -m synthtool instead.
synthtool > Ensuring dependencies.
synthtool > Pulling artman image.
latest: Pulling from googleapis/artman
Digest: sha256:2f6b261ee7fe1aedf238991c93a20b3820de37a343d0cacf3e3e9555c2aaf2ea
Status: Image is up to date for googleapis/artman:latest
synthtool > Cloning googleapis.
synthtool > Running generator for google/cloud/kms/artman_cloudkms.yaml.
synthtool > Generated code into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/kms-v1.
.coveragerc
.flake8
MANIFEST.in
noxfile.py.j2
setup.cfg
Traceback (most recent call last):
File "synth.py", line 44, in <module>
s.shell.run(["nox", "-s", "blacken"], hide_output=False)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 403, in run
with Popen(*popenargs, **kwargs) as process:
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 707, in __init__
restore_signals, start_new_session)
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 1326, in _execute_child
raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'nox'
synthtool > Cleaned up 2 temporary directories.
Synthesis failed
```
Google internal developers can see the full log [here](https://sponge/3abb58ef-0652-4854-846e-e1d9be78e73e).
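The `FileNotFoundError: [Errno 2] No such file or directory: 'nox'` in the traceback above is the bare error `subprocess` raises when an executable is missing from `PATH`. A small guard sketch — the function name is illustrative, not synthtool's API — turns that into an actionable message:

```python
# Check the executable exists on PATH before spawning it, so a missing
# tool fails with a clear message instead of a raw FileNotFoundError.
import shutil
import subprocess

def run_checked(argv):
    exe = shutil.which(argv[0])
    if exe is None:
        raise RuntimeError(
            f"'{argv[0]}' not found on PATH -- is it installed in this environment?"
        )
    return subprocess.run(argv, check=True)
```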
|
1.0
|
Synthesis failed for kms - Hello! Autosynth couldn't regenerate kms. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Switched to branch 'autosynth-kms'
synthtool > You are running the synthesis script directly, this will be disabled in a future release of Synthtool. Please use python3 -m synthtool instead.
synthtool > Ensuring dependencies.
synthtool > Pulling artman image.
latest: Pulling from googleapis/artman
Digest: sha256:2f6b261ee7fe1aedf238991c93a20b3820de37a343d0cacf3e3e9555c2aaf2ea
Status: Image is up to date for googleapis/artman:latest
synthtool > Cloning googleapis.
synthtool > Running generator for google/cloud/kms/artman_cloudkms.yaml.
synthtool > Generated code into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/kms-v1.
.coveragerc
.flake8
MANIFEST.in
noxfile.py.j2
setup.cfg
Traceback (most recent call last):
File "synth.py", line 44, in <module>
s.shell.run(["nox", "-s", "blacken"], hide_output=False)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 403, in run
with Popen(*popenargs, **kwargs) as process:
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 707, in __init__
restore_signals, start_new_session)
File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 1326, in _execute_child
raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'nox'
synthtool > Cleaned up 2 temporary directories.
Synthesis failed
```
Google internal developers can see the full log [here](https://sponge/3abb58ef-0652-4854-846e-e1d9be78e73e).
|
non_comp
|
synthesis failed for kms hello autosynth couldn t regenerate kms broken heart here s the output from running synth py cloning into working repo switched to branch autosynth kms are running the synthesis script directly this will be disabled in a future release of synthtool please use m synthtool instead dependencies artman image latest pulling from googleapis artman digest status image is up to date for googleapis artman latest googleapis generator for google cloud kms artman cloudkms yaml code into home kbuilder cache synthtool googleapis artman genfiles python kms coveragerc manifest in noxfile py setup cfg traceback most recent call last file synth py line in s shell run hide output false file tmpfs src git autosynth env lib site packages synthtool shell py line in run encoding utf file home kbuilder pyenv versions lib subprocess py line in run with popen popenargs kwargs as process file home kbuilder pyenv versions lib subprocess py line in init restore signals start new session file home kbuilder pyenv versions lib subprocess py line in execute child raise child exception type errno num err msg filenotfounderror no such file or directory nox up temporary directories synthesis failed google internal developers can see the full log
| 0
|
86,857
| 10,519,432,997
|
IssuesEvent
|
2019-09-29 17:57:42
|
palavatv/palava
|
https://api.github.com/repos/palavatv/palava
|
closed
|
No man pages for palava-machine and palava-machine-daemon
|
documentation
|
<a href="https://github.com/ajaissle"><img src="https://avatars.githubusercontent.com/u/2892835?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [ajaissle](https://github.com/ajaissle)**
_Thursday Jan 16, 2014 at 13:27 GMT_
_Originally opened as https://github.com/palavatv/palava-machine/issues/3_
---
Hi,
please provide man pages for the following executables:
palava-machine
palava-machine-daemon
Each executable in standard binary directories should have a man page.
|
1.0
|
No man pages for palava-machine and palava-machine-daemon - <a href="https://github.com/ajaissle"><img src="https://avatars.githubusercontent.com/u/2892835?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [ajaissle](https://github.com/ajaissle)**
_Thursday Jan 16, 2014 at 13:27 GMT_
_Originally opened as https://github.com/palavatv/palava-machine/issues/3_
---
Hi,
please provide man pages for the following executables:
palava-machine
palava-machine-daemon
Each executable in standard binary directories should have a man page.
|
non_comp
|
no man pages for palava machine and palava machine daemon issue by thursday jan at gmt originally opened as hi please provide man pages for the following executables palava machine palava machine daemon each executable in standard binary directories should have a man page
| 0
|
11,559
| 3,007,456,886
|
IssuesEvent
|
2015-07-27 16:09:00
|
javaslang/javaslang
|
https://api.github.com/repos/javaslang/javaslang
|
closed
|
Harmonize static factory methods of collections
|
design/refactoring
|
Now that [Seq will also have static factory methods](https://github.com/javaslang/javaslang/issues/341#issuecomment-125161860), including `Seq.empty()`, which will return the empty `List`, the user will expect to find this method in all collections. List currently has `List.nil()`. I'm no friend of redundant methods, so we will add `List.empty()` and `List.nil()` will be removed.
Same for `Stream`. Queue already has `empty`. The existing `Stack.nil()` made no sense from the beginning...
|
1.0
|
Harmonize static factory methods of collections - Now that [Seq will also have static factory methods](https://github.com/javaslang/javaslang/issues/341#issuecomment-125161860), including `Seq.empty()`, which will return the empty `List`, the user will expect to find this method in all collections. List currently has `List.nil()`. I'm no friend of redundant methods, so we will add `List.empty()` and `List.nil()` will be removed.
Same for `Stream`. Queue already has `empty`. The existing `Stack.nil()` made no sense from the beginning...
|
non_comp
|
harmonize static factory methods of collections now that including seq empty which will return the empty list the user will expect to find this method in all collections list currently has list nil i m no friend of redundant methods so we will add list empty and list nil will be removed same for stream queue already has empty the existing stack nil made no sense from the beginning
| 0
|
201,661
| 15,216,913,651
|
IssuesEvent
|
2021-02-17 16:01:36
|
kbase/execution_engine2
|
https://api.github.com/repos/kbase/execution_engine2
|
opened
|
Remove travis.yml
|
testing
|
Doesn't seem like it's necessary to run both travis and gha integrations.
Discuss with Boris when he gets back
|
1.0
|
Remove travis.yml - Doesn't seem like it's necessary to run both travis and gha integrations.
Discuss with Boris when he gets back
|
non_comp
|
remove travis yml doesn t seem like it s necessary to run both travis and gha integrations discuss with boris when he gets back
| 0
|
289,085
| 24,958,355,018
|
IssuesEvent
|
2022-11-01 13:42:39
|
usethesource/rascal
|
https://api.github.com/repos/usethesource/rascal
|
closed
|
[RELEASE] version 0.18.2 (and intermediate 0.18.1 release)
|
release testing
|
# Preliminaries
* Every time this document says "release X" ; we mean to execute the instructions of this Wiki page: https://github.com/usethesource/rascal/wiki/How-to-make-a-release-of-a-Rascal-implemenation-project
* The current release instructions are focused on the Rascal commandline tools and the Eclipse IDE plugin
* If you edit this template, then please push relevant improvements to the template itself for future reference.
# Pre-releasing dependent tools in unstable
First a "pre-release" of the supporting compiler/typechecker tools must be done, so we know we are releasing a consistently compiled standard library.
- [x] typepal and rascal-core compile in the continuous integration environment and no tests fail
- [x] release typepal
- [x] make intermediate rascal release for rascal-core to depend on
- [x] release rascal-core
- [x] bump typepal and rascal-core versions in rascal-maven-plugin to latest releases
- [x] bump typepal and rascal-core versions in rascal-eclipse to latests SNAPSHOT releases
- [x] release rascal-maven-plugin
- [x] bump rascal-maven-plugin dependency in rascal and rascal-eclipse project
- [x] fix new errors and warnings in rascal and rascal-eclipse project
# Manual version checks
- [x] Continuous Integration runs all unit and integration tests and fails no test
- [x] Maximum number of compiler warnings are resolved
- [x] Version numbers are verified manually
# Manual feature tests
- [x] Eclipse download and install latest unstable release from update site https://update.rascal-mpl.org/unstable
- [x] Open a Rascal REPL using the toolbar button
- [x] Can create new Rascal project using the wizard
- [x] Can create new Rascal module using the wizard
- [x] Can edit Rascal file in Rascal project
- [x] Save on Rascal file triggers type-checker
- [x] Rascal outline works
- [x] Rascal navigator works
- [x] Rascal navigator displays working sets
- [x] Rascal navigator displays interpreter's search path
- [x] Clicking links in REPL opens editors and websites
- [x] `rascal>1 + 1` on the REPL
- [x] `import IO; println("Hello Rascal!");`
- [x] in editor, click on use of name jumps to definition
- [x] jump-to-definition also works to library modules and inside library modules
- [x] clicking in outline jumps to editor to right position
- [x] syntax highlighting in editor works
- [x] add dependency on another project by editing `RASCAL.MF`: `Required-Libraries: |lib://otherProject|`, import a module and test the type-checker as well as the interpreter for correct resolution
- [x] `import demo::lang::Pico::Plugin; registerPico();` and test the editor of the example pico files (syntax highlighting, menu options)
- [x] open tutor view and test the search box
- [x] open tutor view and test browsing the documentation
- [x] `import demo::lang::Pico::Plugin; rascal>:edit demo::lang::Pico::Plugin` (not possible yet, see #1418)
- [ ] edit a .concept file, save it and watch the preview in the Tutor Preview view
- [ ] Tutor Preview "edit" button opens the corresponding concept file of the currently visited Concept URL
- [ ] Tutor Preview Forward/Back/Refresh buttons work
# Actual release
- [x] release rascal project (when resolving SNAPSHOT dependencies choose the right versions of vallang etc, and make sure to bump the new rascal SNAPSHOT release one minor version)
- [x] release rascal-eclipse project (take care to choose the right release versions of typepal and rascal-core you release earlier and choose their new SNAPSHOT dependencies to the latest)
- [x] change the configuration of the stable version in `update-site-nexus-link-script/refresh-nexus-data` to the released version
- [x] test the stable update site at https://update.rascal-mpl.org/stable
- [ ] write release notes and publish on the usethesource.io blog
# Downstream implications
The following items can be executed asynchronously, but are nevertheless not to be forgotten:
- [ ] change dependencies on rascal-eclipse and rascal in rascal-eclipse-libraries and the projects it depends on
- [ ] change dependencies of typepal to latest rascal and rascal-eclipse
- [ ] change dependency of rascal-core to latest stable rascal
|
1.0
|
[RELEASE] version 0.18.2 (and intermediate 0.18.1 release) - # Preliminaries
* Every time this document says "release X" ; we mean to execute the instructions of this Wiki page: https://github.com/usethesource/rascal/wiki/How-to-make-a-release-of-a-Rascal-implemenation-project
* The current release instructions are focused on the Rascal commandline tools and the Eclipse IDE plugin
* If you edit this template, then please push relevant improvements to the template itself for future reference.
# Pre-releasing dependent tools in unstable
First a "pre-release" of the supporting compiler/typechecker tools must be done, so we know we are releasing a consistently compiled standard library.
- [x] typepal and rascal-core compile in the continuous integration environment and no tests fail
- [x] release typepal
- [x] make intermediate rascal release for rascal-core to depend on
- [x] release rascal-core
- [x] bump typepal and rascal-core versions in rascal-maven-plugin to latest releases
- [x] bump typepal and rascal-core versions in rascal-eclipse to latests SNAPSHOT releases
- [x] release rascal-maven-plugin
- [x] bump rascal-maven-plugin dependency in rascal and rascal-eclipse project
- [x] fix new errors and warnings in rascal and rascal-eclipse project
# Manual version checks
- [x] Continuous Integration runs all unit and integration tests and fails no test
- [x] Maximum number of compiler warnings are resolved
- [x] Version numbers are verified manually
# Manual feature tests
- [x] Eclipse download and install latest unstable release from update site https://update.rascal-mpl.org/unstable
- [x] Open a Rascal REPL using the toolbar button
- [x] Can create new Rascal project using the wizard
- [x] Can create new Rascal module using the wizard
- [x] Can edit Rascal file in Rascal project
- [x] Save on Rascal file triggers type-checker
- [x] Rascal outline works
- [x] Rascal navigator works
- [x] Rascal navigator displays working sets
- [x] Rascal navigator displays interpreter's search path
- [x] Clicking links in REPL opens editors and websites
- [x] `rascal>1 + 1` on the REPL
- [x] `import IO; println("Hello Rascal!");`
- [x] in editor, click on use of name jumps to definition
- [x] jump-to-definition also works to library modules and inside library modules
- [x] clicking in outline jumps to editor to right position
- [x] syntax highlighting in editor works
- [x] add dependency on another project by editing `RASCAL.MF`: `Required-Libraries: |lib://otherProject|`, import a module and test the type-checker as well as the interpreter for correct resolution
- [x] `import demo::lang::Pico::Plugin; registerPico();` and test the editor of the example pico files (syntax highlighting, menu options)
- [x] open tutor view and test the search box
- [x] open tutor view and test browsing the documentation
- [x] `import demo::lang::Pico::Plugin; rascal>:edit demo::lang::Pico::Plugin` (not possible yet, see #1418)
- [ ] edit a .concept file, save it and watch the preview in the Tutor Preview view
- [ ] Tutor Preview "edit" button opens the corresponding concept file of the currently visited Concept URL
- [ ] Tutor Preview Forward/Back/Refresh buttons work
# Actual release
- [x] release rascal project (when resolving SNAPSHOT dependencies choose the right versions of vallang etc, and make sure to bump the new rascal SNAPSHOT release one minor version)
- [x] release rascal-eclipse project (take care to choose the right release versions of typepal and rascal-core released earlier, and set their new SNAPSHOT dependencies to the latest)
- [x] change the configuration of the stable version in `update-site-nexus-link-script/refresh-nexus-data` to the released version
- [x] test the stable update site at https://update.rascal-mpl.org/stable
- [ ] write release notes and publish on the usethesource.io blog
# Downstream implications
The following items can be executed asynchronously, but are nevertheless not to be forgotten:
- [ ] change dependencies on rascal-eclipse and rascal in rascal-eclipse-libraries and the projects it depends on
- [ ] change dependencies of typepal to latest rascal and rascal-eclipse
- [ ] change dependency of rascal-core to latest stable rascal
|
non_comp
|
version and intermediate release preliminaries every time this document says release x we mean to execute the instructions of this wiki page the current release instructions are focused on the rascal commandline tools and the eclipse ide plugin if you edit this template then please push relevant improvements to the template itself for future reference pre releasing dependent tools in unstable first a pre release of the supporting compiler typechecker tools must be done so we know we are releasing a consistently compiled standard library typepal and rascal core compile in the continuous integration environment and no tests fail release typepal make intermediate rascal release for rascal core to depend on release rascal core bump typepal and rascal core versions in rascal maven plugin to latest releases bump typepal and rascal core versions in rascal eclipse to latests snapshot releases release rascal maven plugin bump rascal maven plugin dependency in rascal and rascal eclipse project fix new errors and warnings in rascal and rascal eclipse project manual version checks continuous integration runs all unit and integration tests and fails no test maximum number of compiler warnings are resolved version numbers are verified manually manual feature tests eclipse download and install latest unstable release from update site open a rascal repl using the toolbar button can create new rascal project using the wizard can create new rascal module using the wizard can edit rascal file in rascal project save on rascal file triggers type checker rascal outline works rascal navigator works rascal navigator displays working sets rascal navigator displays interpreter s search path clicking links in repl opens editors and websites rascal on the repl import io println hello rascal in editor click on use of name jumps to definition jump to definition also works to library modules and inside library modules clicking in outline jumps to editor to right position syntax highlighting in 
editor works add dependency on another project by editing rascal mf required libraries lib otherproject import a module and test the type checker as well as the interpreter for correct resolution import demo lang pico plugin registerpico and test the editor of the example pico files syntax highlighting menu options open tutor view and test the search box open tutor view and test browsing the documentation import demo lang pico plugin rascal edit demo lang pico plugin not possible yet see edit a concept file save it and watch the preview in the tutor preview view tutor preview edit button opens the corresponding concept file of the currently visited concept url tutor preview forward back refresh buttons work actual release release rascal project when resolving snapshot dependencies choose the right versions of vallang etc and make sure to bump the new rascal snapshot release one minor version release rascal eclipse project take care to choose the right release versions of typepal and rascal core you release earlier and choose their new snapshot dependencies to the latest change the configuration of the stable version in update site nexus link script refresh nexus data to the released version test the stable update site at write release notes and publish on the usethesource io blog downstream implications the following items can be executed asynchronously but are nevertheless not to be forgotten change dependencies on rascal eclipse and rascal in rascal eclipse libraries and the projects it depends on change dependencies of typepal to latest rascal and rascal eclipse change dependency of rascal core to latest stable rascal
| 0
|
783,663
| 27,540,586,393
|
IssuesEvent
|
2023-03-07 08:15:31
|
openscd/open-scd
|
https://api.github.com/repos/openscd/open-scd
|
closed
|
LGOS/LSVS supervision - where should valKind and valImport be found?
|
Kind: Enhancement Priority: Urgent Priority: Important Ready for development
|
**Describe the bug**
OpenSCD will instantiate supervision based on the `valImport` and `valKind` attributes.
It looks in the DataTypeTemplates section to confirm these are correct.
It does not look at the first instantiated instance of the LGOS/LSVS. Perhaps it should?
(I apologise for the length of this issue)
For a device we have, the instantiated LN looks like the following:
```xml
<LN lnType="LGOS_4" lnClass="LGOS" inst="1" prefix="">
<Private type="GE_Digital_Energy_UR LN_UUID">Master_LGOS1</Private>
<DOI name="NamPlt">
<DAI name="d" valKind="RO" valImport="true">
<Val>Ctl</Val>
</DAI>
</DOI>
<DOI name="Beh">
<DAI name="d" valKind="RO" valImport="true">
<Val>This logical node's behaviour</Val>
</DAI>
<DAI name="stVal" valKind="RO"/>
</DOI>
<DOI name="St">
<DAI name="d" valKind="RO" valImport="true">
<Val>RxGOOSE1 On operand</Val>
</DAI>
<DAI name="stVal" valKind="RO" sAddr="0:1441793,0"/>
</DOI>
<DOI name="Alm">
<DAI name="d" valKind="RO" valImport="true">
<Val>RxGOOSE1 Off operand</Val>
</DAI>
<DAI name="stVal" valKind="RO" sAddr="0:1507329,0"/>
</DOI>
<DOI name="SimSt">
<DAI name="d" valKind="RO" valImport="true">
<Val>Status showing that Sim messages are being received and accepted for RxGOOSE1</Val>
</DAI>
<DAI name="stVal" valKind="RO" sAddr="2:288538,0"/>
</DOI>
<DOI name="LastStNum">
<DAI name="d" valKind="RO" valImport="true">
<Val>Actual value RxGOOSE1 StNum</Val>
</DAI>
<DAI name="stVal" valKind="RO" sAddr="1:316928,0"/>
<SDI name="units">
<DAI name="SIUnit" valKind="RO">
<Val></Val>
</DAI>
<DAI name="multiplier" valKind="RO">
<Val></Val>
</DAI>
</SDI>
</DOI>
<DOI name="LastSqNum">
<DAI name="d" valKind="RO" valImport="true">
<Val>Actual value RxGOOSE1 SqNum</Val>
</DAI>
<DAI name="stVal" valKind="RO" sAddr="1:316930,0"/>
<SDI name="units">
<DAI name="SIUnit" valKind="RO">
<Val></Val>
</DAI>
<DAI name="multiplier" valKind="RO">
<Val></Val>
</DAI>
</SDI>
</DOI>
<DOI name="RxConfRevNum">
<DAI name="d" valKind="RO" valImport="true">
<Val>Status of the confRev field in last accepted RxGOOSE1 message</Val>
</DAI>
<DAI name="stVal" valKind="RO" sAddr="1:316932,0"/>
<SDI name="units">
<DAI name="SIUnit" valKind="RO">
<Val></Val>
</DAI>
<DAI name="multiplier" valKind="RO">
<Val></Val>
</DAI>
</SDI>
</DOI>
<DOI name="ConfRevNum">
<DAI name="d" valKind="RO" valImport="true">
<Val>Expected ConfRev number</Val>
</DAI>
<DAI name="stVal" valKind="RO" valImport="true">
<Val>1</Val>
</DAI>
<SDI name="units">
<DAI name="SIUnit" valKind="RO">
<Val></Val>
</DAI>
<DAI name="multiplier" valKind="RO">
<Val></Val>
</DAI>
</SDI>
</DOI>
<DOI name="GoCBRef">
<DAI name="d" valKind="RO" valImport="true">
<Val>Setting RxGOOSE1 GoCBRef</Val>
</DAI>
<DAI name="setSrcRef" valKind="RO" valImport="true">
<Val></Val>
</DAI>
</DOI>
<DOI name="MAC">
<DAI name="d" valKind="RO" valImport="true">
<Val>Destination MAC address of messages subscribed to by RxGOOSE1</Val>
</DAI>
<DAI name="setVal" valKind="RO" valImport="true">
<Val>00-00-00-00-00-00</Val>
</DAI>
</DOI>
<DOI name="RgRxMode">
<DAI name="d" valKind="RO" valImport="true">
<Val>R-RxGOOSE1 RECEPTION MODE setting</Val>
</DAI>
<DAI name="setVal" valKind="RO" valImport="true">
<Val>SSM</Val>
</DAI>
</DOI>
<DOI name="RgSrcIP">
<DAI name="d" valKind="RO" valImport="true">
<Val>Source IP address of subscribed R-GOOSE message</Val>
</DAI>
<DAI name="setVal" valKind="RO" valImport="true">
<Val>127.0.0.1</Val>
</DAI>
</DOI>
<DOI name="RgDstIP">
<DAI name="d" valKind="RO" valImport="true">
<Val>Destination IP address of subscribed R-GOOSE message</Val>
</DAI>
<DAI name="setVal" valKind="RO" valImport="true">
<Val>224.0.0.0</Val>
</DAI>
</DOI>
<DOI name="SecEna">
<DAI name="d" valKind="RO" valImport="true">
<Val>R-RxGOOSE1 SECURITY setting</Val>
</DAI>
<DAI name="setVal" valKind="RO" valImport="true">
<Val>None</Val>
</DAI>
</DOI>
<DOI name="APPID">
<DAI name="setVal" valKind="RO" valImport="true">
<Val>0</Val>
</DAI>
<SDI name="units">
<DAI name="SIUnit" valKind="RO">
<Val></Val>
</DAI>
<DAI name="multiplier" valKind="RO">
<Val></Val>
</DAI>
</SDI>
<DAI name="minVal" valKind="RO">
<Val>0</Val>
</DAI>
<DAI name="maxVal" valKind="RO">
<Val>65535</Val>
</DAI>
<DAI name="stepSize" valKind="RO">
<Val>1</Val>
</DAI>
<DAI name="d" valKind="RO" valImport="true">
<Val>Setting RxGOOSE1 ETYPE APPID</Val>
</DAI>
</DOI>
<DOI name="GoDatSetRef">
<DAI name="d" valKind="RO" valImport="true">
<Val>Setting RxGOOSE1 DatSet</Val>
</DAI>
<DAI name="setSrcRef" valKind="RO" valImport="true">
<Val></Val>
</DAI>
</DOI>
<DOI name="GoID">
<DAI name="d" valKind="RO" valImport="true">
<Val>Setting RxGOOSE1 goID</Val>
</DAI>
<DAI name="setVal" valKind="RO" valImport="true">
<Val></Val>
</DAI>
</DOI>
<DOI name="RxMode">
<DAI name="d" valKind="RO" valImport="true">
<Val>RxGOOSE1 MODE setting</Val>
</DAI>
<DAI name="setVal" valKind="RO" valImport="true">
<Val>Main CPU GOOSE</Val>
</DAI>
</DOI>
</LN>
```
Whereas in the DataTypeTemplates section:
```xml
<LNodeType id="LGOS_4" lnClass="LGOS">
<DO name="Beh" type="ENS_0"/>
<DO name="NamPlt" type="LPL_1"/>
<DO name="St" type="SPS_1"/>
<DO name="Alm" type="SPS_4"/>
<DO name="SimSt" type="SPS_1"/>
<DO name="LastStNum" type="INS_0"/>
<DO name="LastSqNum" type="INS_2"/>
<DO name="RxConfRevNum" type="INS_0"/>
<DO name="ConfRevNum" type="INS_0"/>
<DO name="GoCBRef" type="ORG_1"/>
<DO name="MAC" type="VSG_0"/>
<DO name="RgRxMode" type="ENG_2"/>
<DO name="RgSrcIP" type="VSG_0"/>
<DO name="RgDstIP" type="VSG_0"/>
<DO name="SecEna" type="ENG_4"/>
<DO name="APPID" type="ING_0"/>
<DO name="GoDatSetRef" type="ORG_3"/>
<DO name="GoID" type="VSG_0"/>
<DO name="RxMode" type="ENG_9"/>
</LNodeType>
```
Where `GoCBRef` is of type `ORG_1`:
```xml
<DOType id="ORG_1" cdc="ORG">
<DA name="setSrcRef" fc="SP" dchg="true" bType="ObjRef"/>
<DA name="d" fc="DC" bType="VisString255"/>
</DOType>
```
In the DataTypeTemplates, `valImport` and `valKind` are not set; in the first instantiated LGOS, they are.
I am not sure, but I think OpenSCD should allow for `valImport` or `valKind` being _either_ in the first defined instance in the IED, or declared against the DA in the `DOType` declaration. This would follow a normal interpretation along the lines of:
1. Templates are always instantiations of types.
2. The engineering process requires at least one instance to be instantiated.
3. If the first instance has the appropriate `valImport` and `valKind` then they should be allowed to be configured in any copies.
As far as I can tell, IEC 61850-7-1 Ed 2.1, Annex G is not very specific about this and I have not found a tissue which gives more clarity.
What I am proposing which is different to #942 is that the check should be:
> check whether the IED allows instantiating values, i.e. `DataTypeTemplates>LNodeType[lnClass="LGOS"]>DO[name="GoCBRef"]>DA[name="setSrcRef"]` has attributes `valKind="Conf"` or `valKind="RO"` and `valImport="true"`, _or_ check whether the subscriber IED has an instantiated logical node `LN[lnClass="LGOS"]` and examine the first instance in the file, `LN[lnClass="LGOS"]>DOI[name="GoCBRef"]>DAI[name="setSrcRef"]`, for (`valKind="RO"` or `valKind="Conf"`) and `valImport="true"`.
If the check is passed, then when we instantiate the LGOS/LSVS (example for LGOS) we copy the `valKind` and `valImport` attributes from the first instance as shown in the example below:
```xml
<LN lnClass="LGOS" lnType="LGOS_4" inst="13">
<Private type="OpenSCD.create" />
<DOI name="GoCBRef">
<DAI name="setSrcRef" valImport="true" valKind="Conf">
<Val>NRR_PCS221S_001MUGO/LLN0.Ind</Val>
</DAI>
</DOI>
</LN>
```
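The fallback check proposed above could be sketched roughly as follows (TypeScript, operating on already-extracted attribute records; the function names are hypothetical, not OpenSCD API):

```typescript
// Attributes extracted from a DA (in DataTypeTemplates) or a DAI (in the
// first instantiated LGOS/LSVS); both sources may simply not declare them.
interface ValAttrs {
  valKind?: string;
  valImport?: string;
}

// A DA/DAI allows instantiating values when valKind is "Conf" or "RO"
// and valImport is "true" (hypothetical helper name).
function allowsValueInstantiation(attrs?: ValAttrs): boolean {
  if (!attrs) return false;
  const kindOk = attrs.valKind === "Conf" || attrs.valKind === "RO";
  return kindOk && attrs.valImport === "true";
}

// Proposed OR-check: accept if either the DataTypeTemplates declaration
// or the first instantiated DAI carries the required attributes.
function supervisionConfigurable(
  templateDa?: ValAttrs,
  firstInstanceDai?: ValAttrs
): boolean {
  return allowsValueInstantiation(templateDa) || allowsValueInstantiation(firstInstanceDai);
}
```

For the B30 example, the `ORG_1` type declares neither attribute, but the first LGOS instance carries `valKind="RO" valImport="true"` on `setSrcRef`, so the OR-check would pass.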
**Interpreting the Standard...**
There is an argument that perhaps a different workflow is anticipated for whether or not the IED has a fixed or flexible LGOS/LSVS allocation. Examining Figure G.1 and Figure G.2 of IEC 61850-7-1 Ed 2.1, it may appear to anticipate finding the `valImport` and `valKind` attributes against the instantiated instance for a fixed data model:

Arguably for a fixed data model, checking the instances makes sense and for a flexible data model where new instances can be created, checking the datatypes makes more sense. But perhaps we could do both without terrible harm? i.e. If it's in the DataTypeTemplates, use that, if it's missing use the first instance. If there's a conflict, use the first instance.
**References**
* https://iec61850.tissue-db.com/tissue/1396
* https://github.com/openscd/open-scd/issues/942
* See the attached ICD file [B30.zip](https://github.com/openscd/open-scd/files/10671416/B30.zip)
|
2.0
|
LGOS/LSVS supervision - where should valKind and valImport be found? - **Describe the bug**
OpenSCD will instantiate supervision based on the `valImport` and `valKind` attributes.
It looks in the DataTypeTemplates section to confirm these are correct.
It does not look at the first instantiated instance of the LGOS/LSVS. Perhaps it should?
(I apologise for the length of this issue)
For a device we have, the instantiated LN looks like the following:
```xml
<LN lnType="LGOS_4" lnClass="LGOS" inst="1" prefix="">
<Private type="GE_Digital_Energy_UR LN_UUID">Master_LGOS1</Private>
<DOI name="NamPlt">
<DAI name="d" valKind="RO" valImport="true">
<Val>Ctl</Val>
</DAI>
</DOI>
<DOI name="Beh">
<DAI name="d" valKind="RO" valImport="true">
<Val>This logical node's behaviour</Val>
</DAI>
<DAI name="stVal" valKind="RO"/>
</DOI>
<DOI name="St">
<DAI name="d" valKind="RO" valImport="true">
<Val>RxGOOSE1 On operand</Val>
</DAI>
<DAI name="stVal" valKind="RO" sAddr="0:1441793,0"/>
</DOI>
<DOI name="Alm">
<DAI name="d" valKind="RO" valImport="true">
<Val>RxGOOSE1 Off operand</Val>
</DAI>
<DAI name="stVal" valKind="RO" sAddr="0:1507329,0"/>
</DOI>
<DOI name="SimSt">
<DAI name="d" valKind="RO" valImport="true">
<Val>Status showing that Sim messages are being received and accepted for RxGOOSE1</Val>
</DAI>
<DAI name="stVal" valKind="RO" sAddr="2:288538,0"/>
</DOI>
<DOI name="LastStNum">
<DAI name="d" valKind="RO" valImport="true">
<Val>Actual value RxGOOSE1 StNum</Val>
</DAI>
<DAI name="stVal" valKind="RO" sAddr="1:316928,0"/>
<SDI name="units">
<DAI name="SIUnit" valKind="RO">
<Val></Val>
</DAI>
<DAI name="multiplier" valKind="RO">
<Val></Val>
</DAI>
</SDI>
</DOI>
<DOI name="LastSqNum">
<DAI name="d" valKind="RO" valImport="true">
<Val>Actual value RxGOOSE1 SqNum</Val>
</DAI>
<DAI name="stVal" valKind="RO" sAddr="1:316930,0"/>
<SDI name="units">
<DAI name="SIUnit" valKind="RO">
<Val></Val>
</DAI>
<DAI name="multiplier" valKind="RO">
<Val></Val>
</DAI>
</SDI>
</DOI>
<DOI name="RxConfRevNum">
<DAI name="d" valKind="RO" valImport="true">
<Val>Status of the confRev field in last accepted RxGOOSE1 message</Val>
</DAI>
<DAI name="stVal" valKind="RO" sAddr="1:316932,0"/>
<SDI name="units">
<DAI name="SIUnit" valKind="RO">
<Val></Val>
</DAI>
<DAI name="multiplier" valKind="RO">
<Val></Val>
</DAI>
</SDI>
</DOI>
<DOI name="ConfRevNum">
<DAI name="d" valKind="RO" valImport="true">
<Val>Expected ConfRev number</Val>
</DAI>
<DAI name="stVal" valKind="RO" valImport="true">
<Val>1</Val>
</DAI>
<SDI name="units">
<DAI name="SIUnit" valKind="RO">
<Val></Val>
</DAI>
<DAI name="multiplier" valKind="RO">
<Val></Val>
</DAI>
</SDI>
</DOI>
<DOI name="GoCBRef">
<DAI name="d" valKind="RO" valImport="true">
<Val>Setting RxGOOSE1 GoCBRef</Val>
</DAI>
<DAI name="setSrcRef" valKind="RO" valImport="true">
<Val></Val>
</DAI>
</DOI>
<DOI name="MAC">
<DAI name="d" valKind="RO" valImport="true">
<Val>Destination MAC address of messages subscribed to by RxGOOSE1</Val>
</DAI>
<DAI name="setVal" valKind="RO" valImport="true">
<Val>00-00-00-00-00-00</Val>
</DAI>
</DOI>
<DOI name="RgRxMode">
<DAI name="d" valKind="RO" valImport="true">
<Val>R-RxGOOSE1 RECEPTION MODE setting</Val>
</DAI>
<DAI name="setVal" valKind="RO" valImport="true">
<Val>SSM</Val>
</DAI>
</DOI>
<DOI name="RgSrcIP">
<DAI name="d" valKind="RO" valImport="true">
<Val>Source IP address of subscribed R-GOOSE message</Val>
</DAI>
<DAI name="setVal" valKind="RO" valImport="true">
<Val>127.0.0.1</Val>
</DAI>
</DOI>
<DOI name="RgDstIP">
<DAI name="d" valKind="RO" valImport="true">
<Val>Destination IP address of subscribed R-GOOSE message</Val>
</DAI>
<DAI name="setVal" valKind="RO" valImport="true">
<Val>224.0.0.0</Val>
</DAI>
</DOI>
<DOI name="SecEna">
<DAI name="d" valKind="RO" valImport="true">
<Val>R-RxGOOSE1 SECURITY setting</Val>
</DAI>
<DAI name="setVal" valKind="RO" valImport="true">
<Val>None</Val>
</DAI>
</DOI>
<DOI name="APPID">
<DAI name="setVal" valKind="RO" valImport="true">
<Val>0</Val>
</DAI>
<SDI name="units">
<DAI name="SIUnit" valKind="RO">
<Val></Val>
</DAI>
<DAI name="multiplier" valKind="RO">
<Val></Val>
</DAI>
</SDI>
<DAI name="minVal" valKind="RO">
<Val>0</Val>
</DAI>
<DAI name="maxVal" valKind="RO">
<Val>65535</Val>
</DAI>
<DAI name="stepSize" valKind="RO">
<Val>1</Val>
</DAI>
<DAI name="d" valKind="RO" valImport="true">
<Val>Setting RxGOOSE1 ETYPE APPID</Val>
</DAI>
</DOI>
<DOI name="GoDatSetRef">
<DAI name="d" valKind="RO" valImport="true">
<Val>Setting RxGOOSE1 DatSet</Val>
</DAI>
<DAI name="setSrcRef" valKind="RO" valImport="true">
<Val></Val>
</DAI>
</DOI>
<DOI name="GoID">
<DAI name="d" valKind="RO" valImport="true">
<Val>Setting RxGOOSE1 goID</Val>
</DAI>
<DAI name="setVal" valKind="RO" valImport="true">
<Val></Val>
</DAI>
</DOI>
<DOI name="RxMode">
<DAI name="d" valKind="RO" valImport="true">
<Val>RxGOOSE1 MODE setting</Val>
</DAI>
<DAI name="setVal" valKind="RO" valImport="true">
<Val>Main CPU GOOSE</Val>
</DAI>
</DOI>
</LN>
```
Whereas in the DataTypeTemplates section:
```xml
<LNodeType id="LGOS_4" lnClass="LGOS">
<DO name="Beh" type="ENS_0"/>
<DO name="NamPlt" type="LPL_1"/>
<DO name="St" type="SPS_1"/>
<DO name="Alm" type="SPS_4"/>
<DO name="SimSt" type="SPS_1"/>
<DO name="LastStNum" type="INS_0"/>
<DO name="LastSqNum" type="INS_2"/>
<DO name="RxConfRevNum" type="INS_0"/>
<DO name="ConfRevNum" type="INS_0"/>
<DO name="GoCBRef" type="ORG_1"/>
<DO name="MAC" type="VSG_0"/>
<DO name="RgRxMode" type="ENG_2"/>
<DO name="RgSrcIP" type="VSG_0"/>
<DO name="RgDstIP" type="VSG_0"/>
<DO name="SecEna" type="ENG_4"/>
<DO name="APPID" type="ING_0"/>
<DO name="GoDatSetRef" type="ORG_3"/>
<DO name="GoID" type="VSG_0"/>
<DO name="RxMode" type="ENG_9"/>
</LNodeType>
```
Where `GoCBRef` is of type `ORG_1`:
```xml
<DOType id="ORG_1" cdc="ORG">
<DA name="setSrcRef" fc="SP" dchg="true" bType="ObjRef"/>
<DA name="d" fc="DC" bType="VisString255"/>
</DOType>
```
In the DataTypeTemplates, `valImport` and `valKind` are not set; in the first instantiated LGOS, they are.
I am not sure, but I think OpenSCD should allow for `valImport` or `valKind` being _either_ in the first defined instance in the IED, or declared against the DA in the `DOType` declaration. This would follow a normal interpretation along the lines of:
1. Templates are always instantiations of types.
2. The engineering process requires at least one instance to be instantiated.
3. If the first instance has the appropriate `valImport` and `valKind` then they should be allowed to be configured in any copies.
As far as I can tell, IEC 61850-7-1 Ed 2.1, Annex G is not very specific about this and I have not found a tissue which gives more clarity.
What I am proposing which is different to #942 is that the check should be:
> check whether the IED allows instantiating values, i.e. `DataTypeTemplates>LNodeType[lnClass="LGOS"]>DO[name="GoCBRef"]>DA[name="setSrcRef"]` has attributes `valKind="Conf"` or `valKind="RO"` and `valImport="true"`, _or_ check whether the subscriber IED has an instantiated logical node `LN[lnClass="LGOS"]` and examine the first instance in the file, `LN[lnClass="LGOS"]>DOI[name="GoCBRef"]>DAI[name="setSrcRef"]`, for (`valKind="RO"` or `valKind="Conf"`) and `valImport="true"`.
If the check is passed, then when we instantiate the LGOS/LSVS (example for LGOS) we copy the `valKind` and `valImport` attributes from the first instance as shown in the example below:
```xml
<LN lnClass="LGOS" lnType="LGOS_4" inst="13">
<Private type="OpenSCD.create" />
<DOI name="GoCBRef">
<DAI name="setSrcRef" valImport="true" valKind="Conf">
<Val>NRR_PCS221S_001MUGO/LLN0.Ind</Val>
</DAI>
</DOI>
</LN>
```
**Interpreting the Standard...**
There is an argument that perhaps a different workflow is anticipated for whether or not the IED has a fixed or flexible LGOS/LSVS allocation. Examining Figure G.1 and Figure G.2 of IEC 61850-7-1 Ed 2.1, it may appear to anticipate finding the `valImport` and `valKind` attributes against the instantiated instance for a fixed data model:

Arguably for a fixed data model, checking the instances makes sense and for a flexible data model where new instances can be created, checking the datatypes makes more sense. But perhaps we could do both without terrible harm? i.e. If it's in the DataTypeTemplates, use that, if it's missing use the first instance. If there's a conflict, use the first instance.
**References**
* https://iec61850.tissue-db.com/tissue/1396
* https://github.com/openscd/open-scd/issues/942
* See the attached ICD file [B30.zip](https://github.com/openscd/open-scd/files/10671416/B30.zip)
|
non_comp
|
lgos lsvs supervision where should valkind and valimport be found describe the bug openscd will instantiate supervision based on the valimport and valkind attributes it looks in the datatypetemplates section to confirm these are correct it does not look at the first instantiated instance of the lgos lsvs perhaps it should i apologise for the length of this issue for a device we have the instantiated ln looks like the following xml master ctl this logical node s behaviour on operand off operand status showing that sim messages are being received and accepted for actual value stnum actual value sqnum status of the confrev field in last accepted message expected confrev number setting gocbref destination mac address of messages subscribed to by r reception mode setting ssm source ip address of subscribed r goose message destination ip address of subscribed r goose message r security setting none setting etype appid setting datset setting goid mode setting main cpu goose whereas in the datatypetemplates section xml where gocbref is of type org xml in the datatypes valimport and valkind are not set in the first instantiated lgos these are set i am not sure but i think openscd should allow for valimport or valkind being either in the first defined instance in the ied or declared against the da in the dotype declaration this would follow a normal interpretation along the lines of templates are always instantiation of types the engineering process requires at least one instance to be instantiated if the first instance has the appropriate valimport and valkind then they should be allowed to be configured in any copies as far as i can tell iec ed annex g is not very specific about this and i have not found a tissue which gives more clarity what i am proposing which is different to is that the check should be check if the ied allows instantiating values datatypetemplates lnodetype do da has attributes valkind conf ro and valimport true or check if the subscriber ied has 
instantiated logical node ln and examine first instance in the file for ln doi dai if valkind ro valkind conf and valimport true if the check is passed then when we instantiate the lgos lsvs example for lgos we copy the valkind and valimport attributes from the first instance as shown in the example below xml nrr ind interpreting the standard there is an argument that perhaps a different workflow is anticipated for whether or not the ied has a fixed or flexible lgos lsvs allocation examining figure g and figure g of iec ed it may appear to anticipate finding the valimport and valkind attributes against the instantiated instance for a fixed data model arguably for a fixed data model checking the instances makes sense and for a flexible data model where new instances can be created checking the datatypes makes more sense but perhaps we could do both without terrible harm i e if it s in the datatypetemplates use that if it s missing use the first instance if there s a conflict use the first instance references see the attached icd file
| 0
|
61,877
| 15,099,334,126
|
IssuesEvent
|
2021-02-08 02:14:51
|
NVIDIA/TensorRT
|
https://api.github.com/repos/NVIDIA/TensorRT
|
closed
|
Build of TensorRT OSS succeeds but the library file contains no plugin symbols.
|
Component: OSS Build triaged
|
## Description
Hello, I followed the instructions to build TensorRT from source, but I may have made a mistake somewhere and need help.
I check whether the plugin symbols exist with the command:
```
nm -gDC file/libnvinfer_plugin.so
```
Below are the steps I used to build from source.
First, set some default environment variables:
```
export TRT_SOURCE=`pwd`
export TRT_RELEASE=`pwd`/release
```
Create the build folder and run CMake:
```
mkdir build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_RELEASE/lib -DTRT_OUT_DIR=`pwd`/out -DBUILD_PLUGINS=ON -DBUILD_SAMPLES=OFF -DBUILD_PARSERS=OFF -DCUDA_VERSION=10.2
make -j12
```
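With `-DTRT_OUT_DIR` set as above, the freshly built library lands under `build/out`. A quick sketch (the symbol pattern is just an example) to check that this copy — and not the pre-installed one in `/usr/lib/x86_64-linux-gnu` — exports the plugin symbols:

```shell
# Inspect the library that was just built, not the distribution copy.
PLUGIN_LIB=out/libnvinfer_plugin.so   # path relative to the build directory
if [ -f "$PLUGIN_LIB" ]; then
  # Count exported symbols mentioning a known plugin (pattern is an example)
  nm -gDC "$PLUGIN_LIB" | grep -c "NMSPlugin" || echo "no plugin symbols found"
else
  echo "library not found: $PLUGIN_LIB"
fi
```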
The output:
```
Building for TensorRT version: 7.1.3, library version: 7
-- Targeting TRT Platform: x86_64
-- CUDA version set to 10.2
-- cuDNN version set to 8.0
-- Protobuf version set to 3.0.0
-- ========================= Importing and creating target nvinfer ==========================
-- Looking for library nvinfer
-- Library that was found /usr/lib/x86_64-linux-gnu/libnvinfer.so
-- ==========================================================================================
-- ========================= Importing and creating target nvuffparser ==========================
-- Looking for library nvparsers
-- Library that was found /usr/lib/x86_64-linux-gnu/libnvparsers.so
-- ==========================================================================================
CMake Warning at CMakeLists.txt:157 (message):
Detected CUDA version is < 11.0. SM80 not supported.
-- GPU_ARCHS is not defined. Generating CUDA code for default SMs: 35;53;61;70;75
-- ========================= Importing and creating target nvcaffeparser ==========================
-- Looking for library nvparsers
-- Library that was found /usr/lib/x86_64-linux-gnu/libnvparsers.so
-- ==========================================================================================
-- ========================= Importing and creating target nvonnxparser ==========================
-- Looking for library nvonnxparser
-- Library that was found /usr/lib/x86_64-linux-gnu/libnvonnxparser.so
-- ==========================================================================================
-- Configuring done
-- Generating done
-- Build files have been written to: /workspace/tensorRT/version-7.1/TensorRT/build
```
```
[ 0%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/nmsPlugin/nmsPlugin.cpp.o
[ 0%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/reorgPlugin/reorgPlugin.cpp.o
[ 1%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/batchedNMSPlugin/batchedNMSPlugin.cpp.o
[ 2%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/gridAnchorPlugin/gridAnchorPlugin.cpp.o
[ 2%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/nvFasterRCNN/nvFasterRCNNPlugin.cpp.o
[ 4%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/normalizePlugin/normalizePlugin.cpp.o
[ 4%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/regionPlugin/regionPlugin.cpp.o
[ 5%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/priorBoxPlugin/priorBoxPlugin.cpp.o
[ 5%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/nmsPlugin/nmsPlugin.cpp.o
[ 6%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/priorBoxPlugin/priorBoxPlugin.cpp.o
[ 7%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/flattenConcat/flattenConcat.cpp.o
[ 8%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/normalizePlugin/normalizePlugin.cpp.o
[ 8%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/cropAndResizePlugin/cropAndResizePlugin.cpp.o
[ 9%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/proposalPlugin/proposalPlugin.cpp.o
[ 9%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/reorgPlugin/reorgPlugin.cpp.o
[ 10%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/batchTilePlugin/batchTilePlugin.cpp.o
[ 10%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/detectionLayerPlugin/detectionLayerPlugin.cpp.o
[ 11%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/gridAnchorPlugin/gridAnchorPlugin.cpp.o
[ 12%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/proposalLayerPlugin/proposalLayerPlugin.cpp.o
[ 13%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/pyramidROIAlignPlugin/pyramidROIAlignPlugin.cpp.o
[ 14%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/regionPlugin/regionPlugin.cpp.o
[ 14%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/generateDetectionPlugin/generateDetectionPlugin.cpp.o
[ 15%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/multilevelProposeROI/multilevelProposeROIPlugin.cpp.o
[ 16%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/multilevelCropAndResizePlugin/multilevelCropAndResizePlugin.cpp.o
[ 16%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/resizeNearestPlugin/resizeNearestPlugin.cpp.o
[ 17%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/specialSlicePlugin/specialSlicePlugin.cpp.o
[ 17%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/nvFasterRCNN/nvFasterRCNNPlugin.cpp.o
[ 18%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/batchedNMSPlugin/batchedNMSPlugin.cpp.o
[ 19%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/instanceNormalizationPlugin/instanceNormalizationPlugin.cpp.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp: In member function 'virtual int nvinfer1::plugin::MultilevelProposeROI::enqueue(int, const void* const*, void**, void*, cudaStream_t)':
/workspace/tensorRT/version-7.1/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp:395:44: warning: pointer of type 'void *' used in arithmetic [-Wpointer-arith]
mParam, proposal_ws, workspace + kernel_workspace_offset,
~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~
/workspace/tensorRT/version-7.1/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp:408:19: warning: pointer of type 'void *' used in arithmetic [-Wpointer-arith]
workspace + kernel_workspace_offset, ctopk_ws, reinterpret_cast<void**>(mDeviceScores),
~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~
[ 19%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/groupNormalizationPlugin/groupNormalizationKernel.cu.o
[ 20%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/flattenConcat/flattenConcat.cpp.o
[ 21%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/groupNormalizationPlugin/groupNormalizationPlugin.cpp.o
[ 22%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/coordConvACPlugin/coordConvACPlugin.cpp.o
[ 23%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/fcPlugin/fcPlugin.cpp.o
[ 23%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/embLayerNormPlugin/embLayerNormPlugin.cpp.o
[ 23%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/cropAndResizePlugin/cropAndResizePlugin.cpp.o
[ 24%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/geluPlugin/geluPlugin.cpp.o
[ 25%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/proposalPlugin/proposalPlugin.cpp.o
[ 25%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/fused_multihead_attention_fp16_128_64_kernel.sm75.cpp.o
[ 26%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/batchTilePlugin/batchTilePlugin.cpp.o
[ 26%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/detectionLayerPlugin/detectionLayerPlugin.cpp.o
[ 27%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/fused_multihead_attention_fp16_128_64_kernel.sm80.cpp.o
[ 28%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/proposalLayerPlugin/proposalLayerPlugin.cpp.o
[ 29%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/fused_multihead_attention_fp16_384_64_kernel.sm75.cpp.o
[ 29%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/fused_multihead_attention_fp16_384_64_kernel.sm80.cpp.o
[ 30%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/fused_multihead_attention_int8_128_64_kernel.sm75.cpp.o
[ 31%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/fused_multihead_attention_int8_128_64_kernel.sm80.cpp.o
[ 32%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/pyramidROIAlignPlugin/pyramidROIAlignPlugin.cpp.o
[ 32%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/fused_multihead_attention_int8_384_64_kernel.sm75.cpp.o
[ 32%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/generateDetectionPlugin/generateDetectionPlugin.cpp.o
[ 33%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/fused_multihead_attention_int8_384_64_kernel.sm80.cpp.o
[ 34%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/multilevelProposeROI/multilevelProposeROIPlugin.cpp.o
[ 35%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/qkvToContextPlugin.cpp.o
[ 35%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/skipLayerNormPlugin/skipLayerNormPlugin.cpp.o
[ 36%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/kernel.cpp.o
[ 37%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/multilevelCropAndResizePlugin/multilevelCropAndResizePlugin.cpp.o
[ 38%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/common/checkMacrosPlugin.cpp.o
[ 38%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/resizeNearestPlugin/resizeNearestPlugin.cpp.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp: In member function 'virtual int nvinfer1::plugin::MultilevelProposeROI::enqueue(int, const void* const*, void**, void*, cudaStream_t)':
/workspace/tensorRT/version-7.1/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp:395:44: warning: pointer of type 'void *' used in arithmetic [-Wpointer-arith]
mParam, proposal_ws, workspace + kernel_workspace_offset,
~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~
/workspace/tensorRT/version-7.1/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp:408:19: warning: pointer of type 'void *' used in arithmetic [-Wpointer-arith]
workspace + kernel_workspace_offset, ctopk_ws, reinterpret_cast<void**>(mDeviceScores),
~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~
[ 39%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/specialSlicePlugin/specialSlicePlugin.cpp.o
[ 39%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/common/cudaDriverWrapper.cpp.o
[ 40%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/common/nmsHelper.cpp.o
[ 41%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/common/reducedMathPlugin.cpp.o
[ 42%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/instanceNormalizationPlugin/instanceNormalizationPlugin.cpp.o
[ 42%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/batchedNMSPlugin/batchedNMSInference.cu.o
[ 43%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/batchedNMSPlugin/gatherNMSOutputs.cu.o
[ 43%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/groupNormalizationPlugin/groupNormalizationKernel.cu.o
[ 44%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/coordConvACPlugin/coordConvACPlugin.cu.o
[ 44%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/allClassNMS.cu.o
[ 45%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/groupNormalizationPlugin/groupNormalizationPlugin.cpp.o
[ 46%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/bboxDeltas2Proposals.cu.o
[ 47%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/coordConvACPlugin/coordConvACPlugin.cpp.o
[ 48%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/common.cu.o
[ 48%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/cropAndResizeKernel.cu.o
[ 48%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/embLayerNormPlugin/embLayerNormPlugin.cpp.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
[ 49%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/fcPlugin/fcPlugin.cpp.o
[ 50%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/decodeBBoxes.cu.o
[ 51%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/geluPlugin/geluPlugin.cpp.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
[ 51%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/fused_multihead_attention_fp16_128_64_kernel.sm75.cpp.o
[ 52%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/fused_multihead_attention_fp16_128_64_kernel.sm80.cpp.o
[ 53%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/fused_multihead_attention_fp16_384_64_kernel.sm75.cpp.o
[ 53%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/fused_multihead_attention_fp16_384_64_kernel.sm80.cpp.o
[ 54%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/fused_multihead_attention_int8_128_64_kernel.sm75.cpp.o
[ 55%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/fused_multihead_attention_int8_128_64_kernel.sm80.cpp.o
[ 55%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/fused_multihead_attention_int8_384_64_kernel.sm75.cpp.o
[ 56%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/fused_multihead_attention_int8_384_64_kernel.sm80.cpp.o
[ 57%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/qkvToContextPlugin.cpp.o
[ 57%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/skipLayerNormPlugin/skipLayerNormPlugin.cpp.o
[ 58%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/kernel.cpp.o
[ 59%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/checkMacrosPlugin.cpp.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
[ 59%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/cudaDriverWrapper.cpp.o
[ 60%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/detectionForward.cu.o
[ 61%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/nmsHelper.cpp.o
[ 62%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/reducedMathPlugin.cpp.o
[ 62%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/batchedNMSPlugin/batchedNMSInference.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
[ 63%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/batchedNMSPlugin/gatherNMSOutputs.cu.o
[ 64%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/coordConvACPlugin/coordConvACPlugin.cu.o
[ 64%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/allClassNMS.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
[ 64%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/extractFgScores.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
[ 65%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/gatherTopDetections.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
[ 66%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/generateAnchors.cu.o
[ 66%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/gridAnchorLayer.cu.o
[ 67%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/maskRCNNKernels.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
[ 68%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/nmsLayer.cu.o
[ 69%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/bboxDeltas2Proposals.cu.o
[ 70%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/common.cu.o
[ 70%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/normalizeLayer.cu.o
[ 71%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/permuteData.cu.o
[ 72%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/priorBoxLayer.cu.o
[ 72%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/proposalKernel.cu.o
[ 73%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/proposalsForward.cu.o
[ 74%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/regionForward.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 74%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/reorgForward.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 74%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/cropAndResizeKernel.cu.o
[ 75%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/decodeBBoxes.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 76%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/roiPooling.cu.o
[ 77%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/rproiInferenceFused.cu.o
[ 78%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/detectionForward.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 78%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/extractFgScores.cu.o
[ 79%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/gatherTopDetections.cu.o
[ 79%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/sortScoresPerClass.cu.o
[ 80%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/sortScoresPerImage.cu.o
[ 81%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/embLayerNormPlugin/embLayerNormKernel.cu.o
[ 81%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/geluPlugin/geluKernel.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/embLayerNormPlugin/embLayerNormKernel.cu(51): warning: variable "warp_m" was declared but never referenced
/workspace/tensorRT/version-7.1/TensorRT/plugin/embLayerNormPlugin/embLayerNormKernel.cu(51): warning: variable "warp_m" was declared but never referenced
[ 82%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/generateAnchors.cu.o
[ 83%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/qkvToContext.cu.o
[ 84%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/skipLayerNormPlugin/skipLayerNormKernel.cu.o
[ 84%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/gridAnchorLayer.cu.o
[ 85%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/maskRCNNKernels.cu.o
[ 86%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/nmsLayer.cu.o
[ 86%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/InferPlugin.cpp.o
[ 87%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/__/samples/common/logger.cpp.o
[ 87%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/normalizeLayer.cu.o
[ 88%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/permuteData.cu.o
[ 89%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/priorBoxLayer.cu.o
[ 89%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/proposalKernel.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 90%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/proposalsForward.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 91%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/regionForward.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 91%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/reorgForward.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 92%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/roiPooling.cu.o
[ 93%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/rproiInferenceFused.cu.o
[ 93%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/sortScoresPerClass.cu.o
[ 94%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/sortScoresPerImage.cu.o
[ 95%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/embLayerNormPlugin/embLayerNormKernel.cu.o
[ 95%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/geluPlugin/geluKernel.cu.o
[ 96%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/qkvToContext.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/embLayerNormPlugin/embLayerNormKernel.cu(51): warning: variable "warp_m" was declared but never referenced
[ 97%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/skipLayerNormPlugin/skipLayerNormKernel.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/embLayerNormPlugin/embLayerNormKernel.cu(51): warning: variable "warp_m" was declared but never referenced
[ 97%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/InferPlugin.cpp.o
[ 98%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/__/samples/common/logger.cpp.o
[ 99%] Linking CXX shared library ../out/libnvinfer_plugin.so
[ 99%] Built target nvinfer_plugin
[100%] Linking CXX static library ../out/libnvinfer_plugin_static.a
[100%] Built target nvinfer_plugin_static
```
## Environment
**TensorRT Version**: 7.1
**GPU Type**: 1080i
**Nvidia Driver Version**: 450.66
**CUDA Version**: 10.2
**CUDNN Version**: 8.0
## Relevant Files
Check out my-libnvinfer_plugin.so and origin-libnvinfer_plugin.so:
https://www.dropbox.com/sh/zpdvxjc673spqaj/AAAcbV-_F29SOdYnoZ9XqUHha?dl=0
Build TensorRT OSS succeeds, but the library file does not have any plugin symbols.

## Description
Hello, I followed the instructions to build TensorRT from source, but I may have made a mistake somewhere and need help.
I check whether the plugin symbols exist with this command:
```
nm -gDC file/libnvinfer_plugin.so
```
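As a sanity check, a plugin library with the plugins linked in should export the `initLibNvInferPlugins` entry point and the individual `*PluginCreator` symbols. A minimal sketch of that filtering step, run here against simulated `nm` output (the addresses are made up; the real input would be the output of the `nm -gDC` command above):

```shell
# Simulated excerpt of `nm -gDC libnvinfer_plugin.so` output; the
# addresses are placeholders, only the symbol names matter here.
nm_output='0000000000123450 T initLibNvInferPlugins
0000000000123460 T nvinfer1::plugin::GridAnchorPluginCreator::getPluginName() const'

# A correctly built plugin library exports the init entry point:
printf '%s\n' "$nm_output" | grep -c 'initLibNvInferPlugins'   # prints 1
# ...and the individual plugin creators:
printf '%s\n' "$nm_output" | grep -c 'PluginCreator'           # prints 1
```

If both counts are zero on the real library, make sure `nm` is pointed at the freshly built file under the CMake out directory rather than the preinstalled one in `/usr/lib/x86_64-linux-gnu/`.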
Below are the steps I used to build from source.
First, export the default path variables:
```
export TRT_SOURCE=`pwd`
export TRT_RELEASE=`pwd`/release
```
Then create a build folder and run CMake:
```
mkdir build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_RELEASE/lib -DTRT_OUT_DIR=`pwd`/out -DBUILD_PLUGINS=ON -DBUILD_SAMPLES=OFF -DBUILD_PARSERS=OFF -DCUDA_VERSION=10.2
make -j12
```
Output:
```
Building for TensorRT version: 7.1.3, library version: 7
-- Targeting TRT Platform: x86_64
-- CUDA version set to 10.2
-- cuDNN version set to 8.0
-- Protobuf version set to 3.0.0
-- ========================= Importing and creating target nvinfer ==========================
-- Looking for library nvinfer
-- Library that was found /usr/lib/x86_64-linux-gnu/libnvinfer.so
-- ==========================================================================================
-- ========================= Importing and creating target nvuffparser ==========================
-- Looking for library nvparsers
-- Library that was found /usr/lib/x86_64-linux-gnu/libnvparsers.so
-- ==========================================================================================
CMake Warning at CMakeLists.txt:157 (message):
Detected CUDA version is < 11.0. SM80 not supported.
-- GPU_ARCHS is not defined. Generating CUDA code for default SMs: 35;53;61;70;75
-- ========================= Importing and creating target nvcaffeparser ==========================
-- Looking for library nvparsers
-- Library that was found /usr/lib/x86_64-linux-gnu/libnvparsers.so
-- ==========================================================================================
-- ========================= Importing and creating target nvonnxparser ==========================
-- Looking for library nvonnxparser
-- Library that was found /usr/lib/x86_64-linux-gnu/libnvonnxparser.so
-- ==========================================================================================
-- Configuring done
-- Generating done
-- Build files have been written to: /workspace/tensorRT/version-7.1/TensorRT/build
```
```
[ 0%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/nmsPlugin/nmsPlugin.cpp.o
[ 0%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/reorgPlugin/reorgPlugin.cpp.o
[ 1%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/batchedNMSPlugin/batchedNMSPlugin.cpp.o
[ 2%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/gridAnchorPlugin/gridAnchorPlugin.cpp.o
[ 2%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/nvFasterRCNN/nvFasterRCNNPlugin.cpp.o
[ 4%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/normalizePlugin/normalizePlugin.cpp.o
[ 4%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/regionPlugin/regionPlugin.cpp.o
[ 5%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/priorBoxPlugin/priorBoxPlugin.cpp.o
[ 5%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/nmsPlugin/nmsPlugin.cpp.o
[ 6%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/priorBoxPlugin/priorBoxPlugin.cpp.o
[ 7%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/flattenConcat/flattenConcat.cpp.o
[ 8%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/normalizePlugin/normalizePlugin.cpp.o
[ 8%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/cropAndResizePlugin/cropAndResizePlugin.cpp.o
[ 9%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/proposalPlugin/proposalPlugin.cpp.o
[ 9%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/reorgPlugin/reorgPlugin.cpp.o
[ 10%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/batchTilePlugin/batchTilePlugin.cpp.o
[ 10%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/detectionLayerPlugin/detectionLayerPlugin.cpp.o
[ 11%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/gridAnchorPlugin/gridAnchorPlugin.cpp.o
[ 12%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/proposalLayerPlugin/proposalLayerPlugin.cpp.o
[ 13%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/pyramidROIAlignPlugin/pyramidROIAlignPlugin.cpp.o
[ 14%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/regionPlugin/regionPlugin.cpp.o
[ 14%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/generateDetectionPlugin/generateDetectionPlugin.cpp.o
[ 15%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/multilevelProposeROI/multilevelProposeROIPlugin.cpp.o
[ 16%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/multilevelCropAndResizePlugin/multilevelCropAndResizePlugin.cpp.o
[ 16%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/resizeNearestPlugin/resizeNearestPlugin.cpp.o
[ 17%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/specialSlicePlugin/specialSlicePlugin.cpp.o
[ 17%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/nvFasterRCNN/nvFasterRCNNPlugin.cpp.o
[ 18%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/batchedNMSPlugin/batchedNMSPlugin.cpp.o
[ 19%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/instanceNormalizationPlugin/instanceNormalizationPlugin.cpp.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp: In member function 'virtual int nvinfer1::plugin::MultilevelProposeROI::enqueue(int, const void* const*, void**, void*, cudaStream_t)':
/workspace/tensorRT/version-7.1/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp:395:44: warning: pointer of type 'void *' used in arithmetic [-Wpointer-arith]
mParam, proposal_ws, workspace + kernel_workspace_offset,
~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~
/workspace/tensorRT/version-7.1/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp:408:19: warning: pointer of type 'void *' used in arithmetic [-Wpointer-arith]
workspace + kernel_workspace_offset, ctopk_ws, reinterpret_cast<void**>(mDeviceScores),
~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~
[ 19%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/groupNormalizationPlugin/groupNormalizationKernel.cu.o
[ 20%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/flattenConcat/flattenConcat.cpp.o
[ 21%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/groupNormalizationPlugin/groupNormalizationPlugin.cpp.o
[ 22%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/coordConvACPlugin/coordConvACPlugin.cpp.o
[ 23%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/fcPlugin/fcPlugin.cpp.o
[ 23%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/embLayerNormPlugin/embLayerNormPlugin.cpp.o
[ 23%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/cropAndResizePlugin/cropAndResizePlugin.cpp.o
[ 24%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/geluPlugin/geluPlugin.cpp.o
[ 25%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/proposalPlugin/proposalPlugin.cpp.o
[ 25%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/fused_multihead_attention_fp16_128_64_kernel.sm75.cpp.o
[ 26%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/batchTilePlugin/batchTilePlugin.cpp.o
[ 26%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/detectionLayerPlugin/detectionLayerPlugin.cpp.o
[ 27%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/fused_multihead_attention_fp16_128_64_kernel.sm80.cpp.o
[ 28%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/proposalLayerPlugin/proposalLayerPlugin.cpp.o
[ 29%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/fused_multihead_attention_fp16_384_64_kernel.sm75.cpp.o
[ 29%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/fused_multihead_attention_fp16_384_64_kernel.sm80.cpp.o
[ 30%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/fused_multihead_attention_int8_128_64_kernel.sm75.cpp.o
[ 31%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/fused_multihead_attention_int8_128_64_kernel.sm80.cpp.o
[ 32%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/pyramidROIAlignPlugin/pyramidROIAlignPlugin.cpp.o
[ 32%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/fused_multihead_attention_int8_384_64_kernel.sm75.cpp.o
[ 32%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/generateDetectionPlugin/generateDetectionPlugin.cpp.o
[ 33%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/fused_multihead_attention_int8_384_64_kernel.sm80.cpp.o
[ 34%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/multilevelProposeROI/multilevelProposeROIPlugin.cpp.o
[ 35%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/qkvToContextPlugin.cpp.o
[ 35%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/skipLayerNormPlugin/skipLayerNormPlugin.cpp.o
[ 36%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/kernel.cpp.o
[ 37%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/multilevelCropAndResizePlugin/multilevelCropAndResizePlugin.cpp.o
[ 38%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/common/checkMacrosPlugin.cpp.o
[ 38%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/resizeNearestPlugin/resizeNearestPlugin.cpp.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp: In member function 'virtual int nvinfer1::plugin::MultilevelProposeROI::enqueue(int, const void* const*, void**, void*, cudaStream_t)':
/workspace/tensorRT/version-7.1/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp:395:44: warning: pointer of type 'void *' used in arithmetic [-Wpointer-arith]
mParam, proposal_ws, workspace + kernel_workspace_offset,
~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~
/workspace/tensorRT/version-7.1/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp:408:19: warning: pointer of type 'void *' used in arithmetic [-Wpointer-arith]
workspace + kernel_workspace_offset, ctopk_ws, reinterpret_cast<void**>(mDeviceScores),
~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~
[ 39%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/specialSlicePlugin/specialSlicePlugin.cpp.o
[ 39%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/common/cudaDriverWrapper.cpp.o
[ 40%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/common/nmsHelper.cpp.o
[ 41%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/common/reducedMathPlugin.cpp.o
[ 42%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/instanceNormalizationPlugin/instanceNormalizationPlugin.cpp.o
[ 42%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/batchedNMSPlugin/batchedNMSInference.cu.o
[ 43%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/batchedNMSPlugin/gatherNMSOutputs.cu.o
[ 43%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/groupNormalizationPlugin/groupNormalizationKernel.cu.o
[ 44%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/coordConvACPlugin/coordConvACPlugin.cu.o
[ 44%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/allClassNMS.cu.o
[ 45%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/groupNormalizationPlugin/groupNormalizationPlugin.cpp.o
[ 46%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/bboxDeltas2Proposals.cu.o
[ 47%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/coordConvACPlugin/coordConvACPlugin.cpp.o
[ 48%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/common.cu.o
[ 48%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/cropAndResizeKernel.cu.o
[ 48%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/embLayerNormPlugin/embLayerNormPlugin.cpp.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
[ 49%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/fcPlugin/fcPlugin.cpp.o
[ 50%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/decodeBBoxes.cu.o
[ 51%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/geluPlugin/geluPlugin.cpp.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
[ 51%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/fused_multihead_attention_fp16_128_64_kernel.sm75.cpp.o
[ 52%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/fused_multihead_attention_fp16_128_64_kernel.sm80.cpp.o
[ 53%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/fused_multihead_attention_fp16_384_64_kernel.sm75.cpp.o
[ 53%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/fused_multihead_attention_fp16_384_64_kernel.sm80.cpp.o
[ 54%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/fused_multihead_attention_int8_128_64_kernel.sm75.cpp.o
[ 55%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/fused_multihead_attention_int8_128_64_kernel.sm80.cpp.o
[ 55%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/fused_multihead_attention_int8_384_64_kernel.sm75.cpp.o
[ 56%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/fused_multihead_attention_int8_384_64_kernel.sm80.cpp.o
[ 57%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/qkvToContextPlugin.cpp.o
[ 57%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/skipLayerNormPlugin/skipLayerNormPlugin.cpp.o
[ 58%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/kernel.cpp.o
[ 59%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/checkMacrosPlugin.cpp.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
[ 59%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/cudaDriverWrapper.cpp.o
[ 60%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/detectionForward.cu.o
[ 61%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/nmsHelper.cpp.o
[ 62%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/reducedMathPlugin.cpp.o
[ 62%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/batchedNMSPlugin/batchedNMSInference.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
[ 63%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/batchedNMSPlugin/gatherNMSOutputs.cu.o
[ 64%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/coordConvACPlugin/coordConvACPlugin.cu.o
[ 64%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/allClassNMS.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
[ 64%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/extractFgScores.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
[ 65%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/gatherTopDetections.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
[ 66%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/generateAnchors.cu.o
[ 66%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/gridAnchorLayer.cu.o
[ 67%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/maskRCNNKernels.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
/workspace/tensorRT/version-7.1/TensorRT/plugin/batchedNMSPlugin/gatherNMSOutputs.cu(82): warning: argument is incompatible with corresponding format string conversion
detected during:
instantiation of "void gatherNMSOutputs_kernel<T_BBOX,T_SCORE,nthds_per_cta>(__nv_bool, int, int, int, int, int, const int *, const T_SCORE *, const T_BBOX *, int *, T_BBOX *, T_BBOX *, T_BBOX *, __nv_bool) [with T_BBOX=float, T_SCORE=float, nthds_per_cta=32U]"
(117): here
instantiation of "pluginStatus_t gatherNMSOutputs_gpu<T_BBOX,T_SCORE>(cudaStream_t, __nv_bool, int, int, int, int, int, const void *, const void *, const void *, void *, void *, void *, void *, __nv_bool) [with T_BBOX=float, T_SCORE=float]"
(165): here
[ 68%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/nmsLayer.cu.o
[ 69%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/bboxDeltas2Proposals.cu.o
[ 70%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/common.cu.o
[ 70%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/normalizeLayer.cu.o
[ 71%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/permuteData.cu.o
[ 72%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/priorBoxLayer.cu.o
[ 72%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/proposalKernel.cu.o
[ 73%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/proposalsForward.cu.o
[ 74%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/regionForward.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 74%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/reorgForward.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 74%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/cropAndResizeKernel.cu.o
[ 75%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/decodeBBoxes.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 76%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/roiPooling.cu.o
[ 77%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/rproiInferenceFused.cu.o
[ 78%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/detectionForward.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 78%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/extractFgScores.cu.o
[ 79%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/gatherTopDetections.cu.o
[ 79%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/sortScoresPerClass.cu.o
[ 80%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/sortScoresPerImage.cu.o
[ 81%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/embLayerNormPlugin/embLayerNormKernel.cu.o
[ 81%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/geluPlugin/geluKernel.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/embLayerNormPlugin/embLayerNormKernel.cu(51): warning: variable "warp_m" was declared but never referenced
/workspace/tensorRT/version-7.1/TensorRT/plugin/embLayerNormPlugin/embLayerNormKernel.cu(51): warning: variable "warp_m" was declared but never referenced
[ 82%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/generateAnchors.cu.o
[ 83%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/qkvToContext.cu.o
[ 84%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/skipLayerNormPlugin/skipLayerNormKernel.cu.o
[ 84%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/gridAnchorLayer.cu.o
[ 85%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/maskRCNNKernels.cu.o
[ 86%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/nmsLayer.cu.o
[ 86%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/InferPlugin.cpp.o
[ 87%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/__/samples/common/logger.cpp.o
[ 87%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/normalizeLayer.cu.o
[ 88%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/permuteData.cu.o
[ 89%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/priorBoxLayer.cu.o
[ 89%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/proposalKernel.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 90%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/proposalsForward.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 91%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/regionForward.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 91%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/reorgForward.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 92%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/roiPooling.cu.o
[ 93%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/rproiInferenceFused.cu.o
[ 93%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/sortScoresPerClass.cu.o
[ 94%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/common/kernels/sortScoresPerImage.cu.o
[ 95%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/embLayerNormPlugin/embLayerNormKernel.cu.o
[ 95%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/geluPlugin/geluKernel.cu.o
[ 96%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/bertQKVToContextPlugin/qkvToContext.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/embLayerNormPlugin/embLayerNormKernel.cu(51): warning: variable "warp_m" was declared but never referenced
[ 97%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin_static.dir/skipLayerNormPlugin/skipLayerNormKernel.cu.o
/workspace/tensorRT/version-7.1/TensorRT/plugin/embLayerNormPlugin/embLayerNormKernel.cu(51): warning: variable "warp_m" was declared but never referenced
[ 97%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/InferPlugin.cpp.o
[ 98%] Building CXX object plugin/CMakeFiles/nvinfer_plugin_static.dir/__/samples/common/logger.cpp.o
[ 99%] Linking CXX shared library ../out/libnvinfer_plugin.so
[ 99%] Built target nvinfer_plugin
[100%] Linking CXX static library ../out/libnvinfer_plugin_static.a
[100%] Built target nvinfer_plugin_static
```
## Environment
**TensorRT Version**: 7.1
**GPU Type**: 1080i
**Nvidia Driver Version**: 450.66
**CUDA Version**: 10.2
**CUDNN Version**: 8.0
## Relevant Files
Check out `my-libnvinfer_plugin.so` (built as above) and `origin-libnvinfer_plugin.so` (from the release) for comparison:
https://www.dropbox.com/sh/zpdvxjc673spqaj/AAAcbV-_F29SOdYnoZ9XqUHha?dl=0
gathernmsoutputs gpu cudastream t nv bool int int int int int const void const void const void void void void void nv bool here building cxx object plugin cmakefiles nvinfer plugin static dir bertqkvtocontextplugin fused multihead attention kernel cpp o building cxx object plugin cmakefiles nvinfer plugin static dir bertqkvtocontextplugin fused multihead attention kernel cpp o building cxx object plugin cmakefiles nvinfer plugin static dir bertqkvtocontextplugin fused multihead attention kernel cpp o building cxx object plugin cmakefiles nvinfer plugin static dir bertqkvtocontextplugin fused multihead attention kernel cpp o building cxx object plugin cmakefiles nvinfer plugin static dir bertqkvtocontextplugin fused multihead attention kernel cpp o building cxx object plugin cmakefiles nvinfer plugin static dir bertqkvtocontextplugin fused multihead attention kernel cpp o building cxx object plugin cmakefiles nvinfer plugin static dir bertqkvtocontextplugin fused multihead attention kernel cpp o building cxx object plugin cmakefiles nvinfer plugin static dir bertqkvtocontextplugin fused multihead attention kernel cpp o building cxx object plugin cmakefiles nvinfer plugin static dir bertqkvtocontextplugin qkvtocontextplugin cpp o building cxx object plugin cmakefiles nvinfer plugin static dir skiplayernormplugin skiplayernormplugin cpp o building cxx object plugin cmakefiles nvinfer plugin static dir common kernels kernel cpp o building cxx object plugin cmakefiles nvinfer plugin static dir common checkmacrosplugin cpp o workspace tensorrt version tensorrt plugin batchednmsplugin gathernmsoutputs cu warning argument is incompatible with corresponding format string conversion detected during instantiation of void gathernmsoutputs kernel nv bool int int int int int const int const t score const t bbox int t bbox t bbox t bbox nv bool here instantiation of pluginstatus t gathernmsoutputs gpu cudastream t nv bool int int int int int const void const void const void void 
void void void nv bool here building cxx object plugin cmakefiles nvinfer plugin static dir common cudadriverwrapper cpp o building cuda object plugin cmakefiles nvinfer plugin dir common kernels detectionforward cu o building cxx object plugin cmakefiles nvinfer plugin static dir common nmshelper cpp o building cxx object plugin cmakefiles nvinfer plugin static dir common reducedmathplugin cpp o building cuda object plugin cmakefiles nvinfer plugin static dir batchednmsplugin batchednmsinference cu o workspace tensorrt version tensorrt plugin batchednmsplugin gathernmsoutputs cu warning argument is incompatible with corresponding format string conversion detected during instantiation of void gathernmsoutputs kernel nv bool int int int int int const int const t score const t bbox int t bbox t bbox t bbox nv bool here instantiation of pluginstatus t gathernmsoutputs gpu cudastream t nv bool int int int int int const void const void const void void void void void nv bool here building cuda object plugin cmakefiles nvinfer plugin static dir batchednmsplugin gathernmsoutputs cu o building cuda object plugin cmakefiles nvinfer plugin static dir coordconvacplugin coordconvacplugin cu o building cuda object plugin cmakefiles nvinfer plugin static dir common kernels allclassnms cu o workspace tensorrt version tensorrt plugin batchednmsplugin gathernmsoutputs cu warning argument is incompatible with corresponding format string conversion detected during instantiation of void gathernmsoutputs kernel nv bool int int int int int const int const t score const t bbox int t bbox t bbox t bbox nv bool here instantiation of pluginstatus t gathernmsoutputs gpu cudastream t nv bool int int int int int const void const void const void void void void void nv bool here building cuda object plugin cmakefiles nvinfer plugin dir common kernels extractfgscores cu o workspace tensorrt version tensorrt plugin batchednmsplugin gathernmsoutputs cu warning argument is incompatible with 
corresponding format string conversion detected during instantiation of void gathernmsoutputs kernel nv bool int int int int int const int const t score const t bbox int t bbox t bbox t bbox nv bool here instantiation of pluginstatus t gathernmsoutputs gpu cudastream t nv bool int int int int int const void const void const void void void void void nv bool here building cuda object plugin cmakefiles nvinfer plugin dir common kernels gathertopdetections cu o workspace tensorrt version tensorrt plugin batchednmsplugin gathernmsoutputs cu warning argument is incompatible with corresponding format string conversion detected during instantiation of void gathernmsoutputs kernel nv bool int int int int int const int const t score const t bbox int t bbox t bbox t bbox nv bool here instantiation of pluginstatus t gathernmsoutputs gpu cudastream t nv bool int int int int int const void const void const void void void void void nv bool here building cuda object plugin cmakefiles nvinfer plugin dir common kernels generateanchors cu o building cuda object plugin cmakefiles nvinfer plugin dir common kernels gridanchorlayer cu o building cuda object plugin cmakefiles nvinfer plugin dir common kernels maskrcnnkernels cu o workspace tensorrt version tensorrt plugin batchednmsplugin gathernmsoutputs cu warning argument is incompatible with corresponding format string conversion detected during instantiation of void gathernmsoutputs kernel nv bool int int int int int const int const t score const t bbox int t bbox t bbox t bbox nv bool here instantiation of pluginstatus t gathernmsoutputs gpu cudastream t nv bool int int int int int const void const void const void void void void void nv bool here workspace tensorrt version tensorrt plugin batchednmsplugin gathernmsoutputs cu warning argument is incompatible with corresponding format string conversion detected during instantiation of void gathernmsoutputs kernel nv bool int int int int int const int const t score const t bbox int t 
bbox t bbox t bbox nv bool here instantiation of pluginstatus t gathernmsoutputs gpu cudastream t nv bool int int int int int const void const void const void void void void void nv bool here building cuda object plugin cmakefiles nvinfer plugin dir common kernels nmslayer cu o building cuda object plugin cmakefiles nvinfer plugin static dir common kernels cu o building cuda object plugin cmakefiles nvinfer plugin static dir common kernels common cu o building cuda object plugin cmakefiles nvinfer plugin dir common kernels normalizelayer cu o building cuda object plugin cmakefiles nvinfer plugin dir common kernels permutedata cu o building cuda object plugin cmakefiles nvinfer plugin dir common kernels priorboxlayer cu o building cuda object plugin cmakefiles nvinfer plugin dir common kernels proposalkernel cu o building cuda object plugin cmakefiles nvinfer plugin dir common kernels proposalsforward cu o building cuda object plugin cmakefiles nvinfer plugin dir common kernels regionforward cu o workspace tensorrt version tensorrt plugin common kernels proposalkernel cu warning variable alignment was declared but never referenced building cuda object plugin cmakefiles nvinfer plugin dir common kernels reorgforward cu o workspace tensorrt version tensorrt plugin common kernels proposalkernel cu warning variable alignment was declared but never referenced workspace tensorrt version tensorrt plugin common kernels proposalkernel cu warning variable alignment was declared but never referenced building cuda object plugin cmakefiles nvinfer plugin static dir common kernels cropandresizekernel cu o building cuda object plugin cmakefiles nvinfer plugin static dir common kernels decodebboxes cu o workspace tensorrt version tensorrt plugin common kernels proposalkernel cu warning variable alignment was declared but never referenced building cuda object plugin cmakefiles nvinfer plugin dir common kernels roipooling cu o building cuda object plugin cmakefiles nvinfer plugin dir 
common kernels rproiinferencefused cu o building cuda object plugin cmakefiles nvinfer plugin static dir common kernels detectionforward cu o workspace tensorrt version tensorrt plugin common kernels proposalkernel cu warning variable alignment was declared but never referenced building cuda object plugin cmakefiles nvinfer plugin static dir common kernels extractfgscores cu o building cuda object plugin cmakefiles nvinfer plugin static dir common kernels gathertopdetections cu o building cuda object plugin cmakefiles nvinfer plugin dir common kernels sortscoresperclass cu o building cuda object plugin cmakefiles nvinfer plugin dir common kernels sortscoresperimage cu o building cuda object plugin cmakefiles nvinfer plugin dir emblayernormplugin emblayernormkernel cu o building cuda object plugin cmakefiles nvinfer plugin dir geluplugin gelukernel cu o workspace tensorrt version tensorrt plugin emblayernormplugin emblayernormkernel cu warning variable warp m was declared but never referenced workspace tensorrt version tensorrt plugin emblayernormplugin emblayernormkernel cu warning variable warp m was declared but never referenced building cuda object plugin cmakefiles nvinfer plugin static dir common kernels generateanchors cu o building cuda object plugin cmakefiles nvinfer plugin dir bertqkvtocontextplugin qkvtocontext cu o building cuda object plugin cmakefiles nvinfer plugin dir skiplayernormplugin skiplayernormkernel cu o building cuda object plugin cmakefiles nvinfer plugin static dir common kernels gridanchorlayer cu o building cuda object plugin cmakefiles nvinfer plugin static dir common kernels maskrcnnkernels cu o building cuda object plugin cmakefiles nvinfer plugin static dir common kernels nmslayer cu o building cxx object plugin cmakefiles nvinfer plugin dir inferplugin cpp o building cxx object plugin cmakefiles nvinfer plugin dir samples common logger cpp o building cuda object plugin cmakefiles nvinfer plugin static dir common kernels 
normalizelayer cu o building cuda object plugin cmakefiles nvinfer plugin static dir common kernels permutedata cu o building cuda object plugin cmakefiles nvinfer plugin static dir common kernels priorboxlayer cu o building cuda object plugin cmakefiles nvinfer plugin static dir common kernels proposalkernel cu o workspace tensorrt version tensorrt plugin common kernels proposalkernel cu warning variable alignment was declared but never referenced building cuda object plugin cmakefiles nvinfer plugin static dir common kernels proposalsforward cu o workspace tensorrt version tensorrt plugin common kernels proposalkernel cu warning variable alignment was declared but never referenced workspace tensorrt version tensorrt plugin common kernels proposalkernel cu warning variable alignment was declared but never referenced building cuda object plugin cmakefiles nvinfer plugin static dir common kernels regionforward cu o workspace tensorrt version tensorrt plugin common kernels proposalkernel cu warning variable alignment was declared but never referenced building cuda object plugin cmakefiles nvinfer plugin static dir common kernels reorgforward cu o workspace tensorrt version tensorrt plugin common kernels proposalkernel cu warning variable alignment was declared but never referenced building cuda object plugin cmakefiles nvinfer plugin static dir common kernels roipooling cu o building cuda object plugin cmakefiles nvinfer plugin static dir common kernels rproiinferencefused cu o building cuda object plugin cmakefiles nvinfer plugin static dir common kernels sortscoresperclass cu o building cuda object plugin cmakefiles nvinfer plugin static dir common kernels sortscoresperimage cu o building cuda object plugin cmakefiles nvinfer plugin static dir emblayernormplugin emblayernormkernel cu o building cuda object plugin cmakefiles nvinfer plugin static dir geluplugin gelukernel cu o building cuda object plugin cmakefiles nvinfer plugin static dir bertqkvtocontextplugin 
qkvtocontext cu o workspace tensorrt version tensorrt plugin emblayernormplugin emblayernormkernel cu warning variable warp m was declared but never referenced building cuda object plugin cmakefiles nvinfer plugin static dir skiplayernormplugin skiplayernormkernel cu o workspace tensorrt version tensorrt plugin emblayernormplugin emblayernormkernel cu warning variable warp m was declared but never referenced building cxx object plugin cmakefiles nvinfer plugin static dir inferplugin cpp o building cxx object plugin cmakefiles nvinfer plugin static dir samples common logger cpp o linking cxx shared library out libnvinfer plugin so built target nvinfer plugin linking cxx static library out libnvinfer plugin static a built target nvinfer plugin static environment tensorrt version gpu type nvidia driver version cuda version cudnn version relevant files check out my libnvinfer plugin so and origin libnvinfer plugin so
| 0
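The lowercased, punctuation-free `text` column that closes each record (including the long build-log cell above) appears to be derived from the issue title plus body. A minimal Python sketch of that normalization, reconstructed from the visible records rather than from the dataset's actual pipeline, which is not shown here:

```python
import re

def normalize_issue_text(title: str, body: str) -> str:
    """Approximate the dataset's derived `text` column: combine title and
    body, lowercase, strip URLs, digits and punctuation, collapse spaces.
    This is a best-guess reconstruction from the sample records."""
    combined = f"{title} {body}".lower()
    combined = re.sub(r"http\S+", " ", combined)   # URLs are absent in the samples
    combined = re.sub(r"[^a-z\s]", " ", combined)  # keep letters only
    return re.sub(r"\s+", " ", combined).strip()
```

Spot-checking against a record below reproduces its `text` value exactly, which suggests the sketch is close to the original preprocessing.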
|
19,382
| 26,903,371,342
|
IssuesEvent
|
2023-02-06 17:09:06
|
storybookjs/storybook
|
https://api.github.com/repos/storybookjs/storybook
|
closed
|
@emotion/core upgrade along with React 17 throws error
|
question / support compatibility with other tools PN
|
I recently upgraded Storybook to 6.1.7, React - 17, @emotion/core - 11.0.0, @emotion/react - 11.1.1. I have updated the import like this **`import { jsx } from '@emotion/react'`** in all files. But Storybook does not reflect the change.
It shows the following error.

|
True
|
@emotion/core upgrade along with React 17 throws error - I recently upgraded Storybook to 6.1.7, React - 17, @emotion/core - 11.0.0, @emotion/react - 11.1.1. I have updated the import like this **`import { jsx } from '@emotion/react'`** in all files. But Storybook does not reflect the change.
It shows the following error.

|
comp
|
emotion core upgrade along with react throws error i recently upgraded storybook to react emotion core emotion react i have updated the import like this import jsx from emotion react in all files but storybook is not reflected with the change it shows the following error
| 1
|
308,084
| 9,430,012,495
|
IssuesEvent
|
2019-04-12 07:56:21
|
cbabnik/discord_bot
|
https://api.github.com/repos/cbabnik/discord_bot
|
closed
|
json periodic backups
|
high priority
|
This is pretty important. We don't want people losing their hard earned credits.
|
1.0
|
json periodic backups - This is pretty important. We don't want people losing their hard earned credits.
|
non_comp
|
json periodic backups this is pretty important we don t want people losing their hard earned credits
| 0
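The backup request in the record above can be sketched as a small routine. Everything here (file names, timestamp format, interval, the credits schema) is hypothetical, since the issue gives no implementation details:

```python
import json
import shutil
import threading
import time
from pathlib import Path

def backup_json(src: Path, backup_dir: Path) -> Path:
    """Copy a JSON file to a timestamped backup, validating it first so a
    corrupt file never becomes the newest backup."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    json.loads(src.read_text())  # raises if the source is corrupt
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_dir / f"{src.stem}-{stamp}{src.suffix}"
    shutil.copy2(src, dest)
    return dest

def start_periodic_backup(src: Path, backup_dir: Path, interval_s: int = 3600) -> threading.Thread:
    """Run backup_json on a background timer; daemon thread so it never
    blocks process shutdown."""
    def loop():
        while True:
            backup_json(src, backup_dir)
            time.sleep(interval_s)
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t
```

Validating before copying is the design choice worth noting: it protects the "hard earned credits" even if the live file is corrupted mid-write.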
|
588,657
| 17,664,081,627
|
IssuesEvent
|
2021-08-22 05:02:52
|
pombase/website
|
https://api.github.com/repos/pombase/website
|
closed
|
Values in wrong columns on reference page for Compara orthologs
|
bug needs checking high priority orthologs
|
The first three lines here are fine, but the other two have the pombe and japonicus genes in the wrong columns. Very strange! I'll look into this next in case it's a deeper problem.
https://www.japonicusdb.org/reference/PMID:26896847

|
1.0
|
Values in wrong columns on reference page for Compara orthologs - The first three lines here are fine, but the other two have the pombe and japonicus genes in the wrong columns. Very strange! I'll look into this next in case it's a deeper problem.
https://www.japonicusdb.org/reference/PMID:26896847

|
non_comp
|
values in wrong columns on reference page for compara orthologs the first three lines here are fine but the other two have the pombe and japonicus genes in the wrong columns very strange i ll look into this next in case it s a deeper problem
| 0
|
1,443
| 3,965,779,176
|
IssuesEvent
|
2016-05-03 09:54:04
|
bundler/bundler-features
|
https://api.github.com/repos/bundler/bundler-features
|
closed
|
Bundler.with_clean_env should only reset bundler specific variables
|
incompatible
|
`Bundler.with_clean_env` currently resets the environment to the state when `Bundler` was required. This makes setting the `ENV` constant from Ruby ineffective. I think `with_clean_env` should only reset the variables that can actually influence the execution of Bundler.
I am using environment variables to configure a complicated test setup. I was previously using my own implementation of `with_clean_env` that broke when upgrading to a newer bundler version in one specific case.
Would this be a feature you are interested in?
|
True
|
Bundler.with_clean_env should only reset bundler specific variables - `Bundler.with_clean_env` currently resets the environment to the state when `Bundler` was required. This makes setting the `ENV` constant from Ruby ineffective. I think `with_clean_env` should only reset the variables that can actually influence the execution of Bundler.
I am using environment variables to configure a complicated test setup. I was previously using my own implementation of `with_clean_env` that broke when upgrading to a newer bundler version in one specific case.
Would this be a feature you are interested in?
|
comp
|
bundler with clean env should only reset bundler specific variables bundler with clean env currently resets the environment to the state when bundler was required this makes setting the env constant from ruby ineffective i think with clean env should only reset the variables that can actually influence the execution of bundler i am using environment variables to configure a complicated test setup i was previously using an own implementation of with clean env that broke when upgrading to a newer bundler version in one specific case would this be a feature you are interested in
| 1
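The behaviour requested above, resetting only the tool's own variables instead of restoring a full environment snapshot, can be illustrated outside Ruby. The key names below are assumed examples of Bundler-related variables, not a definitive list:

```python
import os
from contextlib import contextmanager

# Hypothetical set of tool-specific environment keys; a real implementation
# would enumerate exactly the variables that influence the tool's execution.
BUNDLER_KEYS = ("BUNDLE_GEMFILE", "BUNDLE_PATH", "RUBYOPT", "GEM_HOME", "GEM_PATH")

@contextmanager
def without_tool_env(keys=BUNDLER_KEYS):
    """Temporarily unset only the listed keys, leaving every other
    environment variable (including ones set after startup) untouched."""
    saved = {k: os.environ.pop(k) for k in keys if k in os.environ}
    try:
        yield
    finally:
        os.environ.update(saved)
```

Unlike a snapshot-and-restore approach, variables the caller sets at runtime survive the context manager, which is exactly the property the issue asks for.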
|
84,563
| 16,516,894,182
|
IssuesEvent
|
2021-05-26 10:37:19
|
galasa-dev/projectmanagement
|
https://api.github.com/repos/galasa-dev/projectmanagement
|
closed
|
Need to find out requirements to make Galasa java 11 compliant
|
Eclipse Epic Framework customer-enhancement longterm-heatmap vscode
|
This currently mostly affects the VS Code plugin, as the RedHat language support for Java has enforced the use of Java 11 and above, which Galasa does not currently support.
As mentioned by Richard, eclipse will soon be moving to java 11 too: https://www.eclipse.org/lists/eclipse-pmc/msg03821.html
|
1.0
|
Need to find out requirements to make Galasa java 11 compliant - This currently mostly affects the VS Code plugin, as the RedHat language support for Java has enforced the use of Java 11 and above, which Galasa does not currently support.
As mentioned by Richard, eclipse will soon be moving to java 11 too: https://www.eclipse.org/lists/eclipse-pmc/msg03821.html
|
non_comp
|
need to find out requirements to make galasa java compliant currently mostly affecting the vs code plugin as the redhat language support for java has enforced the use of java and above which galasa does not currently support as mentioned by richard eclipse will soon be moving to java too
| 0
|
9,860
| 11,885,531,400
|
IssuesEvent
|
2020-03-27 19:48:52
|
csf-dev/CSF.Utils
|
https://api.github.com/repos/csf-dev/CSF.Utils
|
closed
|
Remove some obsolete enum parsing logic
|
breaks-compatibility enhancement
|
Some of this logic is duplicated in the framework now, there's no need to keep it.
The only thing worth keeping is the try parse stuff. This should be moved to a small service type.
|
True
|
Remove some obsolete enum parsing logic - Some of this logic is duplicated in the framework now, there's no need to keep it.
The only thing worth keeping is the try parse stuff. This should be moved to a small service type.
|
comp
|
remove some obsolete enum parsing logic some of this logic is duplicated in the framework now there s no need to keep it the only thing worth keeping is the try parse stuff this should be moved to a small service type
| 1
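The "try parse" behaviour worth keeping, moved to a small service type as the issue suggests, might look like this sketch; the class and method names are invented for illustration:

```python
from enum import Enum
from typing import Optional, Type, TypeVar

E = TypeVar("E", bound=Enum)

class EnumParser:
    """Small service type wrapping try-parse semantics: return None
    instead of raising when the name does not match a member."""

    @staticmethod
    def try_parse(enum_cls: Type[E], name: str, ignore_case: bool = True) -> Optional[E]:
        for member in enum_cls:
            if member.name == name or (ignore_case and member.name.lower() == name.lower()):
                return member
        return None
```

Callers branch on `None` rather than catching exceptions, which keeps the duplicated framework parsing logic out of the picture entirely.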
|
97,567
| 11,017,528,470
|
IssuesEvent
|
2019-12-05 08:39:34
|
teamdigitale/confini-amministrativi-istat
|
https://api.github.com/repos/teamdigitale/confini-amministrativi-istat
|
opened
|
[CHANGELOG] Design and development
|
documentation
|
This read-only issue tracks the project's CHANGELOG; to stay up to date and also receive notifications by email, just subscribe to it (see proposal #2).
|
1.0
|
[CHANGELOG] Design and development - This read-only issue tracks the project's CHANGELOG; to stay up to date and also receive notifications by email, just subscribe to it (see proposal #2).
|
non_comp
|
design and development this read only issue tracks the changelog of the project to stay up to date and also receive notifications by email just subscribe to it see proposal
| 0
|
4,630
| 17,035,217,137
|
IssuesEvent
|
2021-07-05 05:55:32
|
hackforla/website
|
https://api.github.com/repos/hackforla/website
|
closed
|
GitHub Actions: Switching update labels when update is added
|
Feature: Board/GitHub Maintenance Size: Large automation enhancement role: back end
|
### Overview
It is quite easy for our busy team to mislabel issues. For this action, we want the "To Update !" label to disappear when the assignee of an issue makes an update to their assigned issue, and add the "Status: Updated" label instead.
### Pseudo-code
- [ ] Create a GitHub action that implements this behavior:
- [ ] Triggers when a new comment is made to an issue with the label "To Update !"
- [ ] Checks if the comment is made by the assignee.
- [ ] If so, removes the "To Update !" label and adds the "Status: Updated" label.
- [ ] Else, exit without problem.
### Checks
- [ ] Properly logs the code to aid in debugging for future programmers.
- [ ] Have comments explaining the reasoning behind opaque parts of your code
- [ ] Test what happens when:
- [ ] A comment is made when there _is no_ "To Update !" label (should do nothing)
- [ ] A comment is made by the assignee when there _is_ a "To Update !" label (should switch label)
- [ ] A comment is made by someone _other than the assignee_ when there is a "To Update !" label (should do nothing)
- [ ] A label is added (should do nothing)
- [ ] A label is removed (should do nothing)
### Resources/Instructions
Never done GitHub actions? [Start here!](https://docs.github.com/en/actions)
Note that you might want to do something outside the scope of the above pseudo-code. If so, be sure to leave comments in your PR or this issue that justify your reasoning. If you feel you need guidance, be sure to reach out! We cannot foresee whether this issue is solvable, or what hard decisions have to be made, but we would love to hear and help you!
#### Additional resources:
[Events that trigger workflows](https://docs.github.com/en/actions/reference/events-that-trigger-workflows)
[Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions)
[GitHub GraphQL](https://docs.github.com/en/graphql)
|
1.0
|
GitHub Actions: Switching update labels when update is added - ### Overview
It is quite easy for our busy team to mislabel issues. For this action, we want the "To Update !" label to disappear when the assignee of an issue makes an update to their assigned issue, and add the "Status: Updated" label instead.
### Pseudo-code
- [ ] Create a GitHub action that implements this behavior:
- [ ] Triggers when a new comment is made to an issue with the label "To Update !"
- [ ] Checks if the comment is made by the assignee.
- [ ] If so, removes the "To Update !" label and adds the "Status: Updated" label.
- [ ] Else, exit without problem.
### Checks
- [ ] Properly logs the code to aid in debugging for future programmers.
- [ ] Have comments explaining the reasoning behind opaque parts of your code
- [ ] Test what happens when:
- [ ] A comment is made when there _is no_ "To Update !" label (should do nothing)
- [ ] A comment is made by the assignee when there _is_ a "To Update !" label (should switch label)
- [ ] A comment is made by someone _other than the assignee_ when there is a "To Update !" label (should do nothing)
- [ ] A label is added (should do nothing)
- [ ] A label is removed (should do nothing)
### Resources/Instructions
Never done GitHub actions? [Start here!](https://docs.github.com/en/actions)
Note that you might want to do something outside the scope of the above pseudo-code. If so, be sure to leave comments in your PR or this issue that justify your reasoning. If you feel you need guidance, be sure to reach out! We cannot foresee whether this issue is solvable, or what hard decisions have to be made, but we would love to hear and help you!
#### Additional resources:
[Events that trigger workflows](https://docs.github.com/en/actions/reference/events-that-trigger-workflows)
[Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions)
[GitHub GraphQL](https://docs.github.com/en/graphql)
|
non_comp
|
github actions switching update labels when update is added overview it is quite easy for our busy team to mislabel issues for this action we want the to update label to disappear when the assignee of an issue makes an update to their assigned issue and add the status updated label instead pseudo code create a github action that implements this behavior triggers when a new comment is made to an issue with the label to update checks if the comment is made by the assignee if so removes the to update label and adds the status updated label else exit without problem checks properly logs the code to aid in debugging for future programmers have comments explaining the reasoning behind opaque parts of your code test what happens when a comment is made when there is no to update label should do nothing a comment is made by the assignee when there is a to update label should switch label a comment is made by someone other than the assignee when there is a to update label should do nothing a label is added should do nothing a label is removed should do nothing resources instructions never done github actions note that you might want to do something outside the scope of the above pseudo code if so be sure to leave comments in your pr or this issue that justifies your reasoning if you feel you need guidance be sure to reach out we cannot foresee whether this issue is solvable or what hard decisions have to be made but we would love to hear and help you additional resources
| 0
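The pseudo-code in the record above separates cleanly into a pure decision step plus GitHub API calls. A sketch of just the decision step, with the API side omitted; the label strings mirror the issue, everything else is illustrative:

```python
def plan_label_update(labels, commenter, assignees,
                      to_update="To Update !", updated="Status: Updated"):
    """Given an issue's labels, the comment author, and the assignees,
    return (labels_to_add, labels_to_remove). The actual GitHub API calls
    would live in the workflow and are not shown here."""
    if to_update not in labels or commenter not in assignees:
        return [], []  # no "To Update !" label, or comment not by an assignee
    return [updated], [to_update]
```

Keeping the decision pure makes the listed checks ("should do nothing" / "should switch label") trivially unit-testable without hitting the API.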
|
4,528
| 7,188,628,418
|
IssuesEvent
|
2018-02-02 10:49:22
|
presscustomizr/customizr
|
https://api.github.com/repos/presscustomizr/customizr
|
closed
|
WooCommerce : breadcrumb issue
|
compatibility-issue enhancement
|
> I have one more problem with the breadcrumbs in woo commerce in association with customizr. With the original woo theme "storefront" the breadcrumbs works fine. But with customizr the breadcrumbs from the categorys, undercategorys and products are wrong.
>
> Is it possible to use the original storefront settings for my shop.
|
True
|
WooCommerce : breadcrumb issue - > I have one more problem with the breadcrumbs in woo commerce in association with customizr. With the original woo theme "storefront" the breadcrumbs works fine. But with customizr the breadcrumbs from the categorys, undercategorys and products are wrong.
>
> Is it possible to use the original storefront settings for my shop.
|
comp
|
woocommerce breadcrumb issue i have one more problem with the breadcrumbs in woo commerce in association with customizr with the original woo theme storefront the breadcrumbs works fine but with customizr the breadcrumbs from the categorys undercategorys and products are wrong is it possible to use the original storefront settings for my shop
| 1
|
4,367
| 7,063,937,734
|
IssuesEvent
|
2018-01-06 00:30:06
|
danielbachhuber/gutenberg-plugin-compatibility
|
https://api.github.com/repos/danielbachhuber/gutenberg-plugin-compatibility
|
opened
|
shortcodes-ultimate
|
incompatible:missing-features state:incompatible
|
[Shortcodes Ultimate](https://wordpress.org/plugins/shortcodes-ultimate/) adds a media button that inserts a shortcode into the editor:



Gutenberg doesn't have equivalent:

|
True
|
shortcodes-ultimate - [Shortcodes Ultimate](https://wordpress.org/plugins/shortcodes-ultimate/) adds a media button that inserts a shortcode into the editor:



Gutenberg doesn't have equivalent:

|
comp
|
shortcodes ultimate adds a media button that inserts a shortcode into the editor gutenberg doesn t have equivalent
| 1
|
11,200
| 13,204,992,188
|
IssuesEvent
|
2020-08-14 17:00:33
|
facebookincubator/facebook-for-woocommerce
|
https://api.github.com/repos/facebookincubator/facebook-for-woocommerce
|
closed
|
incompatibility issue with WooCommerce Membership: Incorrect price exported when membership plug is in use
|
compatibility feature-request skyverge-investigate up-for-grabs
|
When store uses Woocommerce Membership, the Facebook for Woocommerce plugin is taking the price for a product based on what the Admin User would pay as per the Woocommerce Membership plugin's discount methods.
Meaning that whichever call the Facebook for Woocommerce plugin is using to get the product price, it is also taking into account the other plugin's discounts for an admin user who is a member of a Woocommerce Membership Plan that has a discount available to it.
This means the price on the Facebook shop for the store's FB page is incorrect and when a customer would click on the checkout on store button the product will get added to their basket at the actual price and be confusing/frustrating to customer; potentially alienating that customer.
I have also contacted Sky Verge regarding this but they have obviously suggested I contact you and Woocommerce recommended I post this here.
|
True
|
incompatibility issue with WooCommerce Membership: Incorrect price exported when membership plug is in use - When store uses Woocommerce Membership, the Facebook for Woocommerce plugin is taking the price for a product based on what the Admin User would pay as per the Woocommerce Membership plugin's discount methods.
Meaning that whichever call the Facebook for Woocommerce plugin is using to get the product price, it is also taking into account the other plugin's discounts for an admin user who is a member of a Woocommerce Membership Plan that has a discount available to it.
This means the price on the Facebook shop for the store's FB page is incorrect and when a customer would click on the checkout on store button the product will get added to their basket at the actual price and be confusing/frustrating to customer; potentially alienating that customer.
I have also contacted Sky Verge regarding this but they have obviously suggested I contact you and Woocommerce recommended I post this here.
|
comp
|
incompatibility issue with woocommerce membership incorrect price exported when membership plug is in use when store uses woocommerce membership the facebook for woocommerce plugin is taking the price for a product based on what the admin user would pay as per the woocommerce membership plugin s discount methods meaning that whichever call the facebook for woocommerce plugin is using to get the product price it is also taking into account the other plugin s discounts for an admin user who is a member of a woocommerce membership plan that has a discount available to it this means the price on the facebook shop for the store s fb page is incorrect and when a customer would click on the checkout on store button the product will get added to their basket at the actual price and be confusing frustrating to customer potentially alienating that customer i have also contacted sky verge regarding this but they have obviously suggested i contact you and woocommerce recommended i post this here
| 1
|
17,846
| 24,624,333,465
|
IssuesEvent
|
2022-10-16 10:13:07
|
sekiguchi-nagisa/ydsh
|
https://api.github.com/repos/sekiguchi-nagisa/ydsh
|
closed
|
improve positional arguments handling
|
enhancement incompatible change Core
|
like zsh, do not restrict the positional argument number to a single digit
* allow up to ARRAY_SIZE_MAX + 1 positional arguments in `${N}` / `$N`
* if out-of range, report syntax error
* change positional arguments access with op call (as a result, always sync positional arguments with $@ content)
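The proposed lookup rules can be sketched in Python (a toy model of the described behavior, not ydsh's actual C++ implementation; names and the exact bound are assumptions):

```python
ARRAY_SIZE_MAX = 2**31 - 1  # assumed maximum array size, for illustration

def resolve_positional(ref: str, args: list) -> str:
    """Resolve a ${N}-style positional reference against the argument list.

    Multi-digit indices are allowed (like zsh); an index outside the
    supported range is reported as a syntax error instead of silently
    expanding to nothing.
    """
    n = int(ref)
    if n < 1 or n > ARRAY_SIZE_MAX + 1:
        raise SyntaxError(f"positional argument ${{{n}}} is out of range")
    # $1 maps to args[0]; valid indices past the end expand to the empty
    # string, staying in sync with the current $@ content.
    return args[n - 1] if n <= len(args) else ""
```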
|
True
|
improve positional arguments handling - like zsh, does not restrict positional argument number to single digit
* allow up to ARRAY_SIZE_MAX + 1 positional arguments in `${N}` / `$N`
* if out-of range, report syntax error
* change positional arguments access with op call (as a result, always sync positional arguments with $@ content)
|
comp
|
improve positional arguments handling like zsh does not restrict positional argument number to single digit allow up to array size max positional arguments in n n if out of range report syntax error change positional arguments access with op call as a result always sync positional arguments with content
| 1
|
15,625
| 9,564,598,296
|
IssuesEvent
|
2019-05-05 05:21:50
|
scriptex/2048
|
https://api.github.com/repos/scriptex/2048
|
closed
|
CVE-2018-11697 (High) detected in node-sass-v4.11.0
|
security vulnerability
|
## CVE-2018-11697 - High Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sass-v4.11.0</b></p></summary>
<p>
<p>:rainbow: Node.js bindings to libsass</p>
<p>Library home page: <a href=https://github.com/sass/node-sass.git>https://github.com/sass/node-sass.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/scriptex/2048/commit/b58bbe703411c8203ad9e68496cd450c7f2ea208">b58bbe703411c8203ad9e68496cd450c7f2ea208</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Library Source Files (125)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /2048/node_modules/node-sass/src/libsass/src/expand.hpp
- /2048/node_modules/node-sass/src/libsass/src/color_maps.cpp
- /2048/node_modules/node-sass/src/libsass/src/sass_util.hpp
- /2048/node_modules/node-sass/src/libsass/src/utf8/unchecked.h
- /2048/node_modules/node-sass/src/libsass/src/output.hpp
- /2048/node_modules/node-sass/src/libsass/src/sass_values.hpp
- /2048/node_modules/node-sass/src/libsass/src/util.hpp
- /2048/node_modules/node-sass/src/libsass/src/emitter.hpp
- /2048/node_modules/node-sass/src/libsass/src/lexer.cpp
- /2048/node_modules/node-sass/src/libsass/test/test_node.cpp
- /2048/node_modules/node-sass/src/libsass/src/plugins.cpp
- /2048/node_modules/node-sass/src/libsass/include/sass/base.h
- /2048/node_modules/node-sass/src/libsass/src/position.hpp
- /2048/node_modules/node-sass/src/libsass/src/subset_map.hpp
- /2048/node_modules/node-sass/src/libsass/src/operation.hpp
- /2048/node_modules/node-sass/src/libsass/src/remove_placeholders.cpp
- /2048/node_modules/node-sass/src/libsass/src/error_handling.hpp
- /2048/node_modules/node-sass/src/custom_importer_bridge.cpp
- /2048/node_modules/node-sass/src/libsass/contrib/plugin.cpp
- /2048/node_modules/node-sass/src/libsass/src/functions.hpp
- /2048/node_modules/node-sass/src/libsass/test/test_superselector.cpp
- /2048/node_modules/node-sass/src/libsass/src/eval.hpp
- /2048/node_modules/node-sass/src/libsass/src/utf8_string.hpp
- /2048/node_modules/node-sass/src/libsass/src/error_handling.cpp
- /2048/node_modules/node-sass/src/sass_context_wrapper.h
- /2048/node_modules/node-sass/src/libsass/src/node.cpp
- /2048/node_modules/node-sass/src/libsass/src/parser.cpp
- /2048/node_modules/node-sass/src/libsass/src/subset_map.cpp
- /2048/node_modules/node-sass/src/libsass/src/emitter.cpp
- /2048/node_modules/node-sass/src/libsass/src/listize.cpp
- /2048/node_modules/node-sass/src/libsass/src/ast.hpp
- /2048/node_modules/node-sass/src/libsass/src/sass_functions.hpp
- /2048/node_modules/node-sass/src/libsass/src/memory/SharedPtr.cpp
- /2048/node_modules/node-sass/src/libsass/src/output.cpp
- /2048/node_modules/node-sass/src/libsass/src/check_nesting.cpp
- /2048/node_modules/node-sass/src/libsass/src/ast_def_macros.hpp
- /2048/node_modules/node-sass/src/libsass/src/cssize.hpp
- /2048/node_modules/node-sass/src/libsass/src/functions.cpp
- /2048/node_modules/node-sass/src/libsass/src/paths.hpp
- /2048/node_modules/node-sass/src/libsass/src/prelexer.cpp
- /2048/node_modules/node-sass/src/libsass/src/ast_fwd_decl.hpp
- /2048/node_modules/node-sass/src/sass_types/color.cpp
- /2048/node_modules/node-sass/src/libsass/test/test_unification.cpp
- /2048/node_modules/node-sass/src/libsass/src/inspect.hpp
- /2048/node_modules/node-sass/src/libsass/src/values.cpp
- /2048/node_modules/node-sass/src/libsass/src/sass_util.cpp
- /2048/node_modules/node-sass/src/libsass/src/source_map.hpp
- /2048/node_modules/node-sass/src/sass_types/list.h
- /2048/node_modules/node-sass/src/libsass/src/json.cpp
- /2048/node_modules/node-sass/src/libsass/src/check_nesting.hpp
- /2048/node_modules/node-sass/src/libsass/src/units.cpp
- /2048/node_modules/node-sass/src/libsass/src/units.hpp
- /2048/node_modules/node-sass/src/libsass/src/context.cpp
- /2048/node_modules/node-sass/src/libsass/src/utf8/checked.h
- /2048/node_modules/node-sass/src/libsass/src/listize.hpp
- /2048/node_modules/node-sass/src/sass_types/string.cpp
- /2048/node_modules/node-sass/src/libsass/src/context.hpp
- /2048/node_modules/node-sass/src/libsass/src/prelexer.hpp
- /2048/node_modules/node-sass/src/sass_types/boolean.h
- /2048/node_modules/node-sass/src/libsass/include/sass2scss.h
- /2048/node_modules/node-sass/src/libsass/src/eval.cpp
- /2048/node_modules/node-sass/src/libsass/src/expand.cpp
- /2048/node_modules/node-sass/src/libsass/src/operators.cpp
- /2048/node_modules/node-sass/src/sass_types/factory.cpp
- /2048/node_modules/node-sass/src/sass_types/boolean.cpp
- /2048/node_modules/node-sass/src/libsass/src/source_map.cpp
- /2048/node_modules/node-sass/src/sass_types/value.h
- /2048/node_modules/node-sass/src/libsass/src/utf8_string.cpp
- /2048/node_modules/node-sass/src/callback_bridge.h
- /2048/node_modules/node-sass/src/libsass/src/file.cpp
- /2048/node_modules/node-sass/src/libsass/src/sass.cpp
- /2048/node_modules/node-sass/src/libsass/src/node.hpp
- /2048/node_modules/node-sass/src/libsass/src/environment.cpp
- /2048/node_modules/node-sass/src/libsass/src/extend.hpp
- /2048/node_modules/node-sass/src/libsass/src/sass_context.hpp
- /2048/node_modules/node-sass/src/libsass/src/operators.hpp
- /2048/node_modules/node-sass/src/libsass/src/constants.hpp
- /2048/node_modules/node-sass/src/libsass/src/sass.hpp
- /2048/node_modules/node-sass/src/libsass/src/ast_fwd_decl.cpp
- /2048/node_modules/node-sass/src/libsass/src/parser.hpp
- /2048/node_modules/node-sass/src/libsass/src/constants.cpp
- /2048/node_modules/node-sass/src/sass_types/list.cpp
- /2048/node_modules/node-sass/src/libsass/src/cssize.cpp
- /2048/node_modules/node-sass/src/libsass/include/sass/functions.h
- /2048/node_modules/node-sass/src/libsass/src/util.cpp
- /2048/node_modules/node-sass/src/custom_function_bridge.cpp
- /2048/node_modules/node-sass/src/custom_importer_bridge.h
- /2048/node_modules/node-sass/src/libsass/src/bind.cpp
- /2048/node_modules/node-sass/src/libsass/src/inspect.cpp
- /2048/node_modules/node-sass/src/libsass/src/sass_functions.cpp
- /2048/node_modules/node-sass/src/libsass/src/backtrace.cpp
- /2048/node_modules/node-sass/src/libsass/src/extend.cpp
- /2048/node_modules/node-sass/src/sass_types/sass_value_wrapper.h
- /2048/node_modules/node-sass/src/libsass/src/debugger.hpp
- /2048/node_modules/node-sass/src/libsass/src/cencode.c
- /2048/node_modules/node-sass/src/libsass/src/base64vlq.cpp
- /2048/node_modules/node-sass/src/sass_types/number.cpp
- /2048/node_modules/node-sass/src/sass_types/color.h
- /2048/node_modules/node-sass/src/libsass/src/c99func.c
- /2048/node_modules/node-sass/src/libsass/src/position.cpp
- /2048/node_modules/node-sass/src/libsass/src/remove_placeholders.hpp
- /2048/node_modules/node-sass/src/libsass/src/sass_values.cpp
- /2048/node_modules/node-sass/src/libsass/include/sass/values.h
- /2048/node_modules/node-sass/src/libsass/test/test_subset_map.cpp
- /2048/node_modules/node-sass/src/libsass/src/sass2scss.cpp
- /2048/node_modules/node-sass/src/sass_types/null.cpp
- /2048/node_modules/node-sass/src/libsass/include/sass/context.h
- /2048/node_modules/node-sass/src/libsass/src/ast.cpp
- /2048/node_modules/node-sass/src/libsass/src/to_c.cpp
- /2048/node_modules/node-sass/src/libsass/src/to_value.hpp
- /2048/node_modules/node-sass/src/libsass/src/color_maps.hpp
- /2048/node_modules/node-sass/src/sass_context_wrapper.cpp
- /2048/node_modules/node-sass/src/libsass/script/test-leaks.pl
- /2048/node_modules/node-sass/src/libsass/src/lexer.hpp
- /2048/node_modules/node-sass/src/libsass/src/memory/SharedPtr.hpp
- /2048/node_modules/node-sass/src/libsass/src/to_c.hpp
- /2048/node_modules/node-sass/src/libsass/src/to_value.cpp
- /2048/node_modules/node-sass/src/libsass/src/b64/encode.h
- /2048/node_modules/node-sass/src/libsass/src/file.hpp
- /2048/node_modules/node-sass/src/sass_types/map.cpp
- /2048/node_modules/node-sass/src/libsass/src/environment.hpp
- /2048/node_modules/node-sass/src/libsass/src/plugins.hpp
- /2048/node_modules/node-sass/src/binding.cpp
- /2048/node_modules/node-sass/src/libsass/src/sass_context.cpp
- /2048/node_modules/node-sass/src/libsass/src/debug.hpp
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. An out-of-bounds read of a memory region was found in the function Sass::Prelexer::exactly() which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11697>CVE-2018-11697</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-11697 (High) detected in node-sass-v4.11.0 - ## CVE-2018-11697 - High Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sass-v4.11.0</b></p></summary>
<p>
<p>:rainbow: Node.js bindings to libsass</p>
<p>Library home page: <a href=https://github.com/sass/node-sass.git>https://github.com/sass/node-sass.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/scriptex/2048/commit/b58bbe703411c8203ad9e68496cd450c7f2ea208">b58bbe703411c8203ad9e68496cd450c7f2ea208</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Library Source Files (125)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /2048/node_modules/node-sass/src/libsass/src/expand.hpp
- /2048/node_modules/node-sass/src/libsass/src/color_maps.cpp
- /2048/node_modules/node-sass/src/libsass/src/sass_util.hpp
- /2048/node_modules/node-sass/src/libsass/src/utf8/unchecked.h
- /2048/node_modules/node-sass/src/libsass/src/output.hpp
- /2048/node_modules/node-sass/src/libsass/src/sass_values.hpp
- /2048/node_modules/node-sass/src/libsass/src/util.hpp
- /2048/node_modules/node-sass/src/libsass/src/emitter.hpp
- /2048/node_modules/node-sass/src/libsass/src/lexer.cpp
- /2048/node_modules/node-sass/src/libsass/test/test_node.cpp
- /2048/node_modules/node-sass/src/libsass/src/plugins.cpp
- /2048/node_modules/node-sass/src/libsass/include/sass/base.h
- /2048/node_modules/node-sass/src/libsass/src/position.hpp
- /2048/node_modules/node-sass/src/libsass/src/subset_map.hpp
- /2048/node_modules/node-sass/src/libsass/src/operation.hpp
- /2048/node_modules/node-sass/src/libsass/src/remove_placeholders.cpp
- /2048/node_modules/node-sass/src/libsass/src/error_handling.hpp
- /2048/node_modules/node-sass/src/custom_importer_bridge.cpp
- /2048/node_modules/node-sass/src/libsass/contrib/plugin.cpp
- /2048/node_modules/node-sass/src/libsass/src/functions.hpp
- /2048/node_modules/node-sass/src/libsass/test/test_superselector.cpp
- /2048/node_modules/node-sass/src/libsass/src/eval.hpp
- /2048/node_modules/node-sass/src/libsass/src/utf8_string.hpp
- /2048/node_modules/node-sass/src/libsass/src/error_handling.cpp
- /2048/node_modules/node-sass/src/sass_context_wrapper.h
- /2048/node_modules/node-sass/src/libsass/src/node.cpp
- /2048/node_modules/node-sass/src/libsass/src/parser.cpp
- /2048/node_modules/node-sass/src/libsass/src/subset_map.cpp
- /2048/node_modules/node-sass/src/libsass/src/emitter.cpp
- /2048/node_modules/node-sass/src/libsass/src/listize.cpp
- /2048/node_modules/node-sass/src/libsass/src/ast.hpp
- /2048/node_modules/node-sass/src/libsass/src/sass_functions.hpp
- /2048/node_modules/node-sass/src/libsass/src/memory/SharedPtr.cpp
- /2048/node_modules/node-sass/src/libsass/src/output.cpp
- /2048/node_modules/node-sass/src/libsass/src/check_nesting.cpp
- /2048/node_modules/node-sass/src/libsass/src/ast_def_macros.hpp
- /2048/node_modules/node-sass/src/libsass/src/cssize.hpp
- /2048/node_modules/node-sass/src/libsass/src/functions.cpp
- /2048/node_modules/node-sass/src/libsass/src/paths.hpp
- /2048/node_modules/node-sass/src/libsass/src/prelexer.cpp
- /2048/node_modules/node-sass/src/libsass/src/ast_fwd_decl.hpp
- /2048/node_modules/node-sass/src/sass_types/color.cpp
- /2048/node_modules/node-sass/src/libsass/test/test_unification.cpp
- /2048/node_modules/node-sass/src/libsass/src/inspect.hpp
- /2048/node_modules/node-sass/src/libsass/src/values.cpp
- /2048/node_modules/node-sass/src/libsass/src/sass_util.cpp
- /2048/node_modules/node-sass/src/libsass/src/source_map.hpp
- /2048/node_modules/node-sass/src/sass_types/list.h
- /2048/node_modules/node-sass/src/libsass/src/json.cpp
- /2048/node_modules/node-sass/src/libsass/src/check_nesting.hpp
- /2048/node_modules/node-sass/src/libsass/src/units.cpp
- /2048/node_modules/node-sass/src/libsass/src/units.hpp
- /2048/node_modules/node-sass/src/libsass/src/context.cpp
- /2048/node_modules/node-sass/src/libsass/src/utf8/checked.h
- /2048/node_modules/node-sass/src/libsass/src/listize.hpp
- /2048/node_modules/node-sass/src/sass_types/string.cpp
- /2048/node_modules/node-sass/src/libsass/src/context.hpp
- /2048/node_modules/node-sass/src/libsass/src/prelexer.hpp
- /2048/node_modules/node-sass/src/sass_types/boolean.h
- /2048/node_modules/node-sass/src/libsass/include/sass2scss.h
- /2048/node_modules/node-sass/src/libsass/src/eval.cpp
- /2048/node_modules/node-sass/src/libsass/src/expand.cpp
- /2048/node_modules/node-sass/src/libsass/src/operators.cpp
- /2048/node_modules/node-sass/src/sass_types/factory.cpp
- /2048/node_modules/node-sass/src/sass_types/boolean.cpp
- /2048/node_modules/node-sass/src/libsass/src/source_map.cpp
- /2048/node_modules/node-sass/src/sass_types/value.h
- /2048/node_modules/node-sass/src/libsass/src/utf8_string.cpp
- /2048/node_modules/node-sass/src/callback_bridge.h
- /2048/node_modules/node-sass/src/libsass/src/file.cpp
- /2048/node_modules/node-sass/src/libsass/src/sass.cpp
- /2048/node_modules/node-sass/src/libsass/src/node.hpp
- /2048/node_modules/node-sass/src/libsass/src/environment.cpp
- /2048/node_modules/node-sass/src/libsass/src/extend.hpp
- /2048/node_modules/node-sass/src/libsass/src/sass_context.hpp
- /2048/node_modules/node-sass/src/libsass/src/operators.hpp
- /2048/node_modules/node-sass/src/libsass/src/constants.hpp
- /2048/node_modules/node-sass/src/libsass/src/sass.hpp
- /2048/node_modules/node-sass/src/libsass/src/ast_fwd_decl.cpp
- /2048/node_modules/node-sass/src/libsass/src/parser.hpp
- /2048/node_modules/node-sass/src/libsass/src/constants.cpp
- /2048/node_modules/node-sass/src/sass_types/list.cpp
- /2048/node_modules/node-sass/src/libsass/src/cssize.cpp
- /2048/node_modules/node-sass/src/libsass/include/sass/functions.h
- /2048/node_modules/node-sass/src/libsass/src/util.cpp
- /2048/node_modules/node-sass/src/custom_function_bridge.cpp
- /2048/node_modules/node-sass/src/custom_importer_bridge.h
- /2048/node_modules/node-sass/src/libsass/src/bind.cpp
- /2048/node_modules/node-sass/src/libsass/src/inspect.cpp
- /2048/node_modules/node-sass/src/libsass/src/sass_functions.cpp
- /2048/node_modules/node-sass/src/libsass/src/backtrace.cpp
- /2048/node_modules/node-sass/src/libsass/src/extend.cpp
- /2048/node_modules/node-sass/src/sass_types/sass_value_wrapper.h
- /2048/node_modules/node-sass/src/libsass/src/debugger.hpp
- /2048/node_modules/node-sass/src/libsass/src/cencode.c
- /2048/node_modules/node-sass/src/libsass/src/base64vlq.cpp
- /2048/node_modules/node-sass/src/sass_types/number.cpp
- /2048/node_modules/node-sass/src/sass_types/color.h
- /2048/node_modules/node-sass/src/libsass/src/c99func.c
- /2048/node_modules/node-sass/src/libsass/src/position.cpp
- /2048/node_modules/node-sass/src/libsass/src/remove_placeholders.hpp
- /2048/node_modules/node-sass/src/libsass/src/sass_values.cpp
- /2048/node_modules/node-sass/src/libsass/include/sass/values.h
- /2048/node_modules/node-sass/src/libsass/test/test_subset_map.cpp
- /2048/node_modules/node-sass/src/libsass/src/sass2scss.cpp
- /2048/node_modules/node-sass/src/sass_types/null.cpp
- /2048/node_modules/node-sass/src/libsass/include/sass/context.h
- /2048/node_modules/node-sass/src/libsass/src/ast.cpp
- /2048/node_modules/node-sass/src/libsass/src/to_c.cpp
- /2048/node_modules/node-sass/src/libsass/src/to_value.hpp
- /2048/node_modules/node-sass/src/libsass/src/color_maps.hpp
- /2048/node_modules/node-sass/src/sass_context_wrapper.cpp
- /2048/node_modules/node-sass/src/libsass/script/test-leaks.pl
- /2048/node_modules/node-sass/src/libsass/src/lexer.hpp
- /2048/node_modules/node-sass/src/libsass/src/memory/SharedPtr.hpp
- /2048/node_modules/node-sass/src/libsass/src/to_c.hpp
- /2048/node_modules/node-sass/src/libsass/src/to_value.cpp
- /2048/node_modules/node-sass/src/libsass/src/b64/encode.h
- /2048/node_modules/node-sass/src/libsass/src/file.hpp
- /2048/node_modules/node-sass/src/sass_types/map.cpp
- /2048/node_modules/node-sass/src/libsass/src/environment.hpp
- /2048/node_modules/node-sass/src/libsass/src/plugins.hpp
- /2048/node_modules/node-sass/src/binding.cpp
- /2048/node_modules/node-sass/src/libsass/src/sass_context.cpp
- /2048/node_modules/node-sass/src/libsass/src/debug.hpp
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. An out-of-bounds read of a memory region was found in the function Sass::Prelexer::exactly() which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11697>CVE-2018-11697</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_comp
|
cve high detected in node sass cve high severity vulnerability vulnerable library node rainbow node js bindings to libsass library home page a href found in head commit a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries node modules node sass src libsass src expand hpp node modules node sass src libsass src color maps cpp node modules node sass src libsass src sass util hpp node modules node sass src libsass src unchecked h node modules node sass src libsass src output hpp node modules node sass src libsass src sass values hpp node modules node sass src libsass src util hpp node modules node sass src libsass src emitter hpp node modules node sass src libsass src lexer cpp node modules node sass src libsass test test node cpp node modules node sass src libsass src plugins cpp node modules node sass src libsass include sass base h node modules node sass src libsass src position hpp node modules node sass src libsass src subset map hpp node modules node sass src libsass src operation hpp node modules node sass src libsass src remove placeholders cpp node modules node sass src libsass src error handling hpp node modules node sass src custom importer bridge cpp node modules node sass src libsass contrib plugin cpp node modules node sass src libsass src functions hpp node modules node sass src libsass test test superselector cpp node modules node sass src libsass src eval hpp node modules node sass src libsass src string hpp node modules node sass src libsass src error handling cpp node modules node sass src sass context wrapper h node modules node sass src libsass src node cpp node modules node sass src libsass src parser cpp node modules node sass src libsass src subset map cpp node modules node sass src libsass src emitter cpp node modules node sass src libsass src listize cpp node modules node sass src libsass src ast hpp node modules node 
sass src libsass src sass functions hpp node modules node sass src libsass src memory sharedptr cpp node modules node sass src libsass src output cpp node modules node sass src libsass src check nesting cpp node modules node sass src libsass src ast def macros hpp node modules node sass src libsass src cssize hpp node modules node sass src libsass src functions cpp node modules node sass src libsass src paths hpp node modules node sass src libsass src prelexer cpp node modules node sass src libsass src ast fwd decl hpp node modules node sass src sass types color cpp node modules node sass src libsass test test unification cpp node modules node sass src libsass src inspect hpp node modules node sass src libsass src values cpp node modules node sass src libsass src sass util cpp node modules node sass src libsass src source map hpp node modules node sass src sass types list h node modules node sass src libsass src json cpp node modules node sass src libsass src check nesting hpp node modules node sass src libsass src units cpp node modules node sass src libsass src units hpp node modules node sass src libsass src context cpp node modules node sass src libsass src checked h node modules node sass src libsass src listize hpp node modules node sass src sass types string cpp node modules node sass src libsass src context hpp node modules node sass src libsass src prelexer hpp node modules node sass src sass types boolean h node modules node sass src libsass include h node modules node sass src libsass src eval cpp node modules node sass src libsass src expand cpp node modules node sass src libsass src operators cpp node modules node sass src sass types factory cpp node modules node sass src sass types boolean cpp node modules node sass src libsass src source map cpp node modules node sass src sass types value h node modules node sass src libsass src string cpp node modules node sass src callback bridge h node modules node sass src libsass src file cpp node modules node 
sass src libsass src sass cpp node modules node sass src libsass src node hpp node modules node sass src libsass src environment cpp node modules node sass src libsass src extend hpp node modules node sass src libsass src sass context hpp node modules node sass src libsass src operators hpp node modules node sass src libsass src constants hpp node modules node sass src libsass src sass hpp node modules node sass src libsass src ast fwd decl cpp node modules node sass src libsass src parser hpp node modules node sass src libsass src constants cpp node modules node sass src sass types list cpp node modules node sass src libsass src cssize cpp node modules node sass src libsass include sass functions h node modules node sass src libsass src util cpp node modules node sass src custom function bridge cpp node modules node sass src custom importer bridge h node modules node sass src libsass src bind cpp node modules node sass src libsass src inspect cpp node modules node sass src libsass src sass functions cpp node modules node sass src libsass src backtrace cpp node modules node sass src libsass src extend cpp node modules node sass src sass types sass value wrapper h node modules node sass src libsass src debugger hpp node modules node sass src libsass src cencode c node modules node sass src libsass src cpp node modules node sass src sass types number cpp node modules node sass src sass types color h node modules node sass src libsass src c node modules node sass src libsass src position cpp node modules node sass src libsass src remove placeholders hpp node modules node sass src libsass src sass values cpp node modules node sass src libsass include sass values h node modules node sass src libsass test test subset map cpp node modules node sass src libsass src cpp node modules node sass src sass types null cpp node modules node sass src libsass include sass context h node modules node sass src libsass src ast cpp node modules node sass src libsass src to c cpp node 
modules node sass src libsass src to value hpp node modules node sass src libsass src color maps hpp node modules node sass src sass context wrapper cpp node modules node sass src libsass script test leaks pl node modules node sass src libsass src lexer hpp node modules node sass src libsass src memory sharedptr hpp node modules node sass src libsass src to c hpp node modules node sass src libsass src to value cpp node modules node sass src libsass src encode h node modules node sass src libsass src file hpp node modules node sass src sass types map cpp node modules node sass src libsass src environment hpp node modules node sass src libsass src plugins hpp node modules node sass src binding cpp node modules node sass src libsass src sass context cpp node modules node sass src libsass src debug hpp vulnerability details an issue was discovered in libsass through an out of bounds read of a memory region was found in the function sass prelexer exactly which could be leveraged by an attacker to disclose information or manipulated to read from unmapped memory causing a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href step up your open source security game with whitesource
| 0
|
30,093
| 6,020,417,046
|
IssuesEvent
|
2017-06-07 16:24:06
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
Week format is not working correctly on Schedule
|
6.1.3 defect
|
with FullCalendar v2.9.1
Test code:
```
columnFormat="week:'dd DD MMM'"
```
|
1.0
|
Week format is not working correctly on Schedule - with FullCalendar v2.9.1
Test code:
```
columnFormat="week:'dd DD MMM'"
```
|
non_comp
|
week format is not working correctly on schedule with fullcalendar test code columnformat week dd dd mmm
| 0
|
87,564
| 10,927,381,446
|
IssuesEvent
|
2019-11-22 16:34:16
|
alice-i-cecile/Fonts-of-Power
|
https://api.github.com/repos/alice-i-cecile/Fonts-of-Power
|
closed
|
Decide on full list of damage types
|
design polish
|
Currently:
- elemental: fire / lightning / cold
- balance druid: arcane / primal
- diablo: radiant / necrotic
- boring: physical
|
1.0
|
Decide on full list of damage types - Currently:
- elemental: fire / lightning / cold
- balance druid: arcane / primal
- diablo: radiant / necrotic
- boring: physical
|
non_comp
|
decide on full list of damage types currently elemental fire lightning cold balance druid arcane primal diablo radiant necrotic boring physical
| 0
|
758,251
| 26,547,376,448
|
IssuesEvent
|
2023-01-20 02:19:57
|
flipt-io/flipt
|
https://api.github.com/repos/flipt-io/flipt
|
closed
|
[FLI-23] Support Memcached
|
enhancement stale Low priority
|
Now that Redis has been rolled out, it should be relatively simple to support another cache tech like Memcached
### TODO
* [ ] Add memcached as a supported caching protocol
* [ ] Support configuring memcached in the config.yml
* [ ] Add example on how to run Flipt with Memcached [Example with Docker Compose](https://gist.github.com/dbist/ebb1f39f580ad9d07c04c3a3377e2bff)
<sub>From [SyncLinear.com](https://synclinear.com) | [FLI-23](https://linear.app/flipt/issue/FLI-23/support-memcached)</sub>
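The backend-selection part of the TODO could look roughly like this (a hypothetical Python sketch; Flipt itself is written in Go, and the config keys shown are assumptions, not Flipt's actual schema):

```python
SUPPORTED_BACKENDS = {"memory", "redis", "memcached"}

def parse_cache_config(cfg: dict) -> dict:
    """Pick a cache backend from a config.yml-style mapping, rejecting
    anything outside the supported set."""
    cache = cfg.get("cache", {})
    backend = cache.get("backend", "memory")
    if backend not in SUPPORTED_BACKENDS:
        raise ValueError(f"unsupported cache backend: {backend!r}")
    if backend == "memcached":
        # Fall back to the conventional memcached port if none is given.
        return {"backend": backend,
                "hosts": cache.get("hosts", ["localhost:11211"])}
    return {"backend": backend}
```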
|
1.0
|
[FLI-23] Support Memcached - Now that Redis has been rolled out, it should be relatively simple to support another cache tech like Memcached
### TODO
* [ ] Add memcached as a supported caching protocol
* [ ] Support configuring memcached in the config.yml
* [ ] Add example on how to run Flipt with Memcached [Example with Docker Compose](https://gist.github.com/dbist/ebb1f39f580ad9d07c04c3a3377e2bff)
<sub>From [SyncLinear.com](https://synclinear.com) | [FLI-23](https://linear.app/flipt/issue/FLI-23/support-memcached)</sub>
|
non_comp
|
support memcached now that redis has been rolled out it should be relatively simple to support another cache tech like memcached todo add memcached as a supported caching protocol support configuring memcached in the config yml add example on how to run flipt with memcached from
| 0
|
14,698
| 18,047,561,198
|
IssuesEvent
|
2021-09-19 06:35:10
|
PolyhedralDev/Terra
|
https://api.github.com/repos/PolyhedralDev/Terra
|
closed
|
[1.16.5-Forge][Terra-5.3.3-BETA] Crash at game startup with Repurposed Structures and Terraforged on
|
Type: Bug Subsystem: External API Platform: Forge Type: Compatibility
|
Using these mods with Forge 36.2.2:
repurposed_structures_forge-3.2.2+1.16.5
Terra-forge-5.3.3-BETA+ec3b0e5d
TerraForged-1.16.5-0.2.14
The game appears to crash at mod startup with this as the latest.log: https://paste.ee/p/ynhH9
I modified Blame to print out the Structure.STEP map as the error is occurring on it: https://paste.ee/p/ABSLH
From what I can tell, Terra is trying to shove Repurposed Structures's structures into a biome builder at mod init. However, the biome builder crashes when being built because my structures didn't have their generation steps set yet (I set it in FMLCommonSetupEvent and a few other mods do so too).
This seems to happen with Terra's compat code for Terraforged being ran at modinit. Since I only add my structures with BiomeLoadingEvent to all biomes and provide configs to control which biome they can go in, I think it would be best for Terra to skip my structures entirely (a simple check for my modid when iterating over structures would be enough). That way, my own biome allow/disallow list config for my structures will work properly and will automatically be added to Terra's biomes by my own code anyway (unless users disallow the biome by my config).
It is a strange interaction but hopefully this make sense! I could try and add the structures to the STEP map earlier but that won't resolve the issue of Terra bypassing my configs and could lead to my structures being added multiple times to one Terra biome which could have very strange results (added once by Terra and then again by my BiomeLoadingEvent)
|
True
|
[1.16.5-Forge][Terra-5.3.3-BETA] Crash at game startup with Repurposed Structures and Terraforged on - Using these mods with Forge 36.2.2:
repurposed_structures_forge-3.2.2+1.16.5
Terra-forge-5.3.3-BETA+ec3b0e5d
TerraForged-1.16.5-0.2.14
The game appears to crash at mod startup with this as the latest.log: https://paste.ee/p/ynhH9
I modified Blame to print out the Structure.STEP map as the error is occurring on it: https://paste.ee/p/ABSLH
From what I can tell, Terra is trying to shove Repurposed Structures's structures into a biome builder at mod init. However, the biome builder crashes when being built because my structures didn't have their generation steps set yet (I set it in FMLCommonSetupEvent and a few other mods do so too).
This seems to happen with Terra's compat code for Terraforged being ran at modinit. Since I only add my structures with BiomeLoadingEvent to all biomes and provide configs to control which biome they can go in, I think it would be best for Terra to skip my structures entirely (a simple check for my modid when iterating over structures would be enough). That way, my own biome allow/disallow list config for my structures will work properly and will automatically be added to Terra's biomes by my own code anyway (unless users disallow the biome by my config).
It is a strange interaction but hopefully this make sense! I could try and add the structures to the STEP map earlier but that won't resolve the issue of Terra bypassing my configs and could lead to my structures being added multiple times to one Terra biome which could have very strange results (added once by Terra and then again by my BiomeLoadingEvent)
|
comp
|
crash at game startup with repurposed structures and terraforged on using these mods with forge repurposed structures forge terra forge beta terraforged the game appears to crash at mod startup with this as the latest log i modified blame to print out the structure step map as the error is occuring on it from what i can tell terra is trying to shove repurposed structures s structures into a biome builder at mod init however the biome builder crashes when being built because my structures didn t have their generation steps set yet i set it in fmlcommonsetupevent and a few other mods do so too this seems to happen with terra s compat code for terraforged being ran at modinit since i only add my structures with biomeloadingevent to all biomes and provide configs to control which biome they can go in i think it would be best for terra to skip my structures entirely a simple check for my modid when iterating over structures would be enough that way my own biome allow disallow list config for my structures will work properly and will automatically be added to terra s biomes by my own code anyway unless users disallow the biome by my config it is a strange interaction but hopefully this make sense i could try and add the structures to the step map earlier but that won t resolve the issue of terra bypassing my configs and could lead to my structures being added multiple times to one terra biome which could have very strange results added once by terra and then again by my biomeloadingevent
| 1
|
5,165
| 3,517,946,533
|
IssuesEvent
|
2016-01-12 10:25:56
|
urho3d/Urho3D
|
https://api.github.com/repos/urho3d/Urho3D
|
opened
|
Improve build system to generate better Urho3D.pc file
|
build system enhancement
|
Currently the generated Urho3D.pc file is not entirely correct. There are at least two things we could improve.
1. It may still erroneously expose some of the compiler defines that are really required only when building the library and not when using the library.
2. The linker flags are erroneously prepared as if user would always want to link against the library statically, i.e. we actually need less linker flags when linking dynamically.
|
1.0
|
Improve build system to generate better Urho3D.pc file - Currently the generated Urho3D.pc file is not entirely correct. There are at least two things we could improve.
1. It may still erroneously expose some of the compiler defines that are really required only when building the library and not when using the library.
2. The linker flags are erroneously prepared as if user would always want to link against the library statically, i.e. we actually need less linker flags when linking dynamically.
|
non_comp
|
improve build system to generate better pc file currently the generated pc file is not entirely correct there are at least two things we could improve it may still erroneously expose some of the compiler defines that are really required only when building the library and not when using the library the linker flags are erroneously prepared as if user would always want to link against the library statically i e we actually need less linker flags when linking dynamically
| 0
|
28,584
| 8,181,136,190
|
IssuesEvent
|
2018-08-28 21:46:51
|
MyCryptoHQ/MyCrypto
|
https://api.github.com/repos/MyCryptoHQ/MyCrypto
|
closed
|
`lint-staged` precommit adds un-staged files to commit
|
Added to Asana S build / ci issue
|
Sometimes I make temporary changes that I don't want to be committed in my commit. Unfortunately, the prettier precommit hook that runs:
```
"lint-staged": {
"*.{ts,tsx}": [
"prettier --write --single-quote",
"git add"
]
}
```
Adds and commits them, even when they weren't staged. This command should be more sophisticated to only run prettier on and re-add files that were staged for commit.
|
1.0
|
`lint-staged` precommit adds un-staged files to commit - Sometimes I make temporary changes that I don't want to be committed in my commit. Unfortunately, the prettier precommit hook that runs:
```
"lint-staged": {
"*.{ts,tsx}": [
"prettier --write --single-quote",
"git add"
]
}
```
Adds and commits them, even when they weren't staged. This command should be more sophisticated to only run prettier on and re-add files that were staged for commit.
|
non_comp
|
lint staged precommit adds un staged files to commit sometimes i make temporary changes that i don t want to be committed in my commit unfortunately the prettier precommit hook that runs lint staged ts tsx prettier write single quote git add adds and commits them even when they weren t staged this command should be more sophisticated to only run prettier on and re add files that were staged for commit
| 0
|
121
| 2,582,695,675
|
IssuesEvent
|
2015-02-15 15:13:07
|
STEllAR-GROUP/hpx
|
https://api.github.com/repos/STEllAR-GROUP/hpx
|
closed
|
Warning on uninitialized member
|
category: core compiler: clang type: compatibility issue
|
Clang trunk (3.7) is giving the following warning for any code that I try using HPX:
```
/opt/local/include/hpx/util/coroutine/detail/context_generic_context.hpp:188:32:
warning: field 'alloc_' is uninitialized when used here
[-Wuninitialized]
, stack_pointer_(alloc_.allocate(stack_size_))
^
1 warning generated.
```
I've checked code at [line 188 of context_generic_context.hpp](https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/util/coroutine/detail/context_generic_context.hpp#L188) and, even though it's not going to produce any effect (since the `allocate` member function doesn't use any state of its class for my build) it does look like a case of pre-c++11 rules where POD members being default-initialized (not present in the initializer list) are not initialized at all.
I got dozens of such warnings while building HPX (I'm on OS X using clang trunk), all of them pointing the same line, I didn't care but then I noticed it would show up anytime when including HPX headers.
I'm on 0.9.9, but I guess the issue still applies on master.
|
True
|
Warning on uninitialized member - Clang trunk (3.7) is giving the following warning for any code that I try using HPX:
```
/opt/local/include/hpx/util/coroutine/detail/context_generic_context.hpp:188:32:
warning: field 'alloc_' is uninitialized when used here
[-Wuninitialized]
, stack_pointer_(alloc_.allocate(stack_size_))
^
1 warning generated.
```
I've checked code at [line 188 of context_generic_context.hpp](https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/util/coroutine/detail/context_generic_context.hpp#L188) and, even though it's not going to produce any effect (since the `allocate` member function doesn't use any state of its class for my build) it does look like a case of pre-c++11 rules where POD members being default-initialized (not present in the initializer list) are not initialized at all.
I got dozens of such warnings while building HPX (I'm on OS X using clang trunk), all of them pointing the same line, I didn't care but then I noticed it would show up anytime when including HPX headers.
I'm on 0.9.9, but I guess the issue still applies on master.
|
comp
|
warning on uninitialized member clang trunk is giving the following warning for any code that i try using hpx opt local include hpx util coroutine detail context generic context hpp warning field alloc is uninitialized when used here stack pointer alloc allocate stack size warning generated i ve checked code at and even though it s not going to produce any effect since the allocate member function doesn t use any state of its class for my build it does look like a case of pre c rules where pod members being default initialized not present in the initializer list are not initialized at all i got dozens of such warnings while building hpx i m on os x using clang trunk all of them pointing the same line i didn t care but then i noticed it would show up anytime when including hpx headers i m on but i guess the issue still applies on master
| 1
|
12,285
| 14,524,480,631
|
IssuesEvent
|
2020-12-14 11:30:20
|
umijs/umi
|
https://api.github.com/repos/umijs/umi
|
closed
|
Safari Browser SyntaxError: Invalid regular expression: invalid group specifier name
|
Browser Compatibility
|
## What happens?
A clear and concise description of what the bug is.
antd-pro project throws an error at runtime; Chrome works fine, but Safari and Firefox report the error
"umi": "^2.8.7",

|
True
|
Safari Browser SyntaxError: Invalid regular expression: invalid group specifier name - ## What happens?
A clear and concise description of what the bug is.
antd-pro project throws an error at runtime; Chrome works fine, but Safari and Firefox report the error
"umi": "^2.8.7",

|
comp
|
safari browser syntaxerror invalid regular expression invalid group specifier name what happens a clear and concise description of what the bug is antd pro project throws an error at runtime chrome works fine safari firefox report the error umi
| 1
|
84,059
| 3,653,014,642
|
IssuesEvent
|
2016-02-17 04:16:13
|
gophish/gophish
|
https://api.github.com/repos/gophish/gophish
|
opened
|
Add Ability to Clone Campaign
|
enhancement med-priority
|
We have the ability to clone landing pages and templates, but we also need the ability to clone campaigns.
|
1.0
|
Add Ability to Clone Campaign - We have the ability to clone landing pages and templates, but we also need the ability to clone campaigns.
|
non_comp
|
add ability to clone campaign we have the ability to clone landing pages and templates but we also need the ability to clone campaigns
| 0
|
73,089
| 15,252,472,803
|
IssuesEvent
|
2021-02-20 03:04:12
|
AlexRogalskiy/github-action-charts
|
https://api.github.com/repos/AlexRogalskiy/github-action-charts
|
opened
|
CVE-2012-6708 (Medium) detected in jquery-1.8.1.min.js
|
security vulnerability
|
## CVE-2012-6708 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: github-action-charts/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: github-action-charts/node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-charts/commit/619ff53b633061494c3aee072a303e00ad3a14f0">619ff53b633061494c3aee072a303e00ad3a14f0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708>CVE-2012-6708</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-6708">https://nvd.nist.gov/vuln/detail/CVE-2012-6708</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v1.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2012-6708 (Medium) detected in jquery-1.8.1.min.js - ## CVE-2012-6708 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: github-action-charts/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: github-action-charts/node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/github-action-charts/commit/619ff53b633061494c3aee072a303e00ad3a14f0">619ff53b633061494c3aee072a303e00ad3a14f0</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708>CVE-2012-6708</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-6708">https://nvd.nist.gov/vuln/detail/CVE-2012-6708</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v1.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_comp
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file github action charts node modules redeyed examples browser index html path to vulnerable library github action charts node modules redeyed examples browser index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details jquery before is vulnerable to cross site scripting xss attacks the jquery strinput function does not differentiate selectors from html in a reliable fashion in vulnerable versions jquery determined whether the input was html by looking for the character anywhere in the string giving attackers more flexibility when attempting to construct a malicious payload in fixed versions jquery only deems the input to be html if it explicitly starts with the character limiting exploitability only to attackers who can control the beginning of a string which is far less common publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
| 0
|
6,547
| 8,816,541,903
|
IssuesEvent
|
2018-12-30 12:15:43
|
tanersener/mobile-ffmpeg
|
https://api.github.com/repos/tanersener/mobile-ffmpeg
|
closed
|
[IOS] Drop universal dynamic/shared libraries support
|
compatibility
|
According to [Apple Technical Note TN2435](https://developer.apple.com/library/archive/technotes/tn2435/_index.html#//apple_ref/doc/uid/DTS40017543-CH1-TROUBLESHOOTING), `Dynamic libraries outside of a framework bundle, which typically have the file extension .dylib, are not supported on iOS, watchOS, or tvOS, except for the system Swift libraries provided by Xcode`.
This note explains errors received when applications using shared universal libraries are uploaded to `AppStore`. Support for universal shared libraries should be dropped.
|
True
|
[IOS] Drop universal dynamic/shared libraries support - According to [Apple Technical Note TN2435](https://developer.apple.com/library/archive/technotes/tn2435/_index.html#//apple_ref/doc/uid/DTS40017543-CH1-TROUBLESHOOTING), `Dynamic libraries outside of a framework bundle, which typically have the file extension .dylib, are not supported on iOS, watchOS, or tvOS, except for the system Swift libraries provided by Xcode`.
This note explains errors received when applications using shared universal libraries are uploaded to `AppStore`. Support for universal shared libraries should be dropped.
|
comp
|
drop universal dynamic shared libraries support according to dynamic libraries outside of a framework bundle which typically have the file extension dylib are not supported on ios watchos or tvos except for the system swift libraries provided by xcode this note explains errors received when applications using shared universal libraries are uploaded to appstore support for universal shared libraries should be dropped
| 1
|
450,090
| 12,980,288,343
|
IssuesEvent
|
2020-07-22 04:48:45
|
StatCan/kubeflow-containers
|
https://api.github.com/repos/StatCan/kubeflow-containers
|
opened
|
remote-desktop: Address failed checksum on Netdata
|
component/remote-desktop priority/important-soon size/S
|
I saw that the latest [automatic build failed due to a failed checksum for Netdata](https://github.com/StatCan/kubeflow-containers-desktop/runs/895985048?check_suite_focus=true). There have been [important recent security updates to Netdata](https://github.com/netdata/netdata/releases) and the simplest solution might be updating the [checksum](https://github.com/StatCan/kubeflow-containers-desktop/blob/master/base/resources/tools/netdata.sh#L6) corresponding to `wget https://my-netdata.io/kickstart.sh`. Still, this should be tested, and perhaps a more version-specific approach would be better.
|
1.0
|
remote-desktop: Address failed checksum on Netdata - I saw that the latest [automatic build failed due to a failed checksum for Netdata](https://github.com/StatCan/kubeflow-containers-desktop/runs/895985048?check_suite_focus=true). There have been [important recent security updates to Netdata](https://github.com/netdata/netdata/releases) and the simplest solution might be updating the [checksum](https://github.com/StatCan/kubeflow-containers-desktop/blob/master/base/resources/tools/netdata.sh#L6) corresponding to `wget https://my-netdata.io/kickstart.sh`. Still, this should be tested, and perhaps a more version-specific approach would be better.
|
non_comp
|
remote desktop address failed checksum on netdata i saw that the latest there have been and the simplest solution might be updating the corresponding to wget still this should be tested and perhaps a more version specific approach would be better
| 0
|
213,403
| 16,522,211,499
|
IssuesEvent
|
2021-05-26 15:38:51
|
yretenai/yordle
|
https://api.github.com/repos/yretenai/yordle
|
opened
|
Test Cases & Mock Data
|
documentation
|
- [ ] Manually Created or Generated Test Data
- [ ] Test Cases to ensure library consistency
|
1.0
|
Test Cases & Mock Data - - [ ] Manually Created or Generated Test Data
- [ ] Test Cases to ensure library consistency
|
non_comp
|
test cases mock data manually created or generated test data test cases to ensure library consistency
| 0
|
73,760
| 24,789,640,830
|
IssuesEvent
|
2022-10-24 12:49:45
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
Metadata information such as TTL and maxIdle time are not replicated via WAN replication
|
Type: Defect Team: Core Source: Internal Module: WAN
|
**Describe the bug**
Metadata information such as **TTL** and **maxIdle** time are not replicated via WAN replication
**Expected behavior**
As a user when I set TTL or maxIdle time of an entry in a WAN replicated map I expect to see those values in a target cluster.
**To Reproduce**
ClusterA config
```
<map name="metadataTestMap">
<max-idle-seconds>30</max-idle-seconds>
<time-to-live-seconds>60</time-to-live-seconds>
<wan-replication-ref name="my-wan-replication" />
</map
```
ClusterB config
```
<map name="metadataTestMap">
</map
```
Below code runs on ClusterA
```
metadataTestMap.put(1, 1);
metadataTestMap.put(2, 2);
metadataTestMap.put(3, 3, 120, SECONDS, 90, SECONDS);
metadataTestMap.put(4, 4, 150, SECONDS, 150, SECONDS);
```
after 30 seconds the result from both maps;
```
ClusterA [4, 3]
ClusterB [2, 1, 4, 3]
```
after 150 seconds the result from both maps;
```
ClusterA []
ClusterB [2, 1, 4, 3]
**Additional context**
JDK 11, Hazelcast Platform 5.1
Test config for Cluster A;
[hazelcast.xml.txt](https://github.com/hazelcast/hazelcast/files/8595352/hazelcast.xml.txt)
|
1.0
|
Metadata information such as TTL and maxIdle time are not replicated via WAN replication - **Describe the bug**
Metadata information such as **TTL** and **maxIdle** time are not replicated via WAN replication
**Expected behavior**
As a user when I set TTL or maxIdle time of an entry in a WAN replicated map I expect to see those values in a target cluster.
**To Reproduce**
ClusterA config
```
<map name="metadataTestMap">
<max-idle-seconds>30</max-idle-seconds>
<time-to-live-seconds>60</time-to-live-seconds>
<wan-replication-ref name="my-wan-replication" />
</map
```
ClusterB config
```
<map name="metadataTestMap">
</map
```
Below code runs on ClusterA
```
metadataTestMap.put(1, 1);
metadataTestMap.put(2, 2);
metadataTestMap.put(3, 3, 120, SECONDS, 90, SECONDS);
metadataTestMap.put(4, 4, 150, SECONDS, 150, SECONDS);
```
after 30 seconds the result from both maps;
```
ClusterA [4, 3]
ClusterB [2, 1, 4, 3]
```
after 150 seconds the result from both maps;
```
ClusterA []
ClusterB [2, 1, 4, 3]
**Additional context**
JDK 11, Hazelcast Platform 5.1
Test config for Cluster A;
[hazelcast.xml.txt](https://github.com/hazelcast/hazelcast/files/8595352/hazelcast.xml.txt)
|
non_comp
|
metadata information such as ttl and maxidle time are not replicated via wan replication describe the bug metadata information such as ttl and maxidle time are not replicated via wan replication expected behavior as a user when i set ttl or maxidle time of an entry in a wan replicated map i expect to see those values in a target cluster to reproduce clustera config map clusterb config map below code runs on clustera metadatatestmap put metadatatestmap put metadatatestmap put seconds seconds metadatatestmap put seconds seconds after seconds the result from both maps clustera clusterb after seconds the result from both maps clustera clusterb additional context jdk hazelcast platform test config for cluster a
| 0
|
38,648
| 2,849,647,917
|
IssuesEvent
|
2015-05-30 21:41:51
|
chrisblakley/WP-Nebula
|
https://api.github.com/repos/chrisblakley/WP-Nebula
|
opened
|
Compatibility Mode Detection with Edge browser (vs. IE11)
|
Frontend (Style) Low Priority
|
Come up with a polyfill feature to detect Edge vs IE11.
Also, find a polyfill to detect IE11 vs. IE10 besides pointerevents that doesn't require a custom build of modernizr.

|
1.0
|
Compatibility Mode Detection with Edge browser (vs. IE11) - Come up with a polyfill feature to detect Edge vs IE11.
Also, find a polyfill to detect IE11 vs. IE10 besides pointerevents that doesn't require a custom build of modernizr.

|
non_comp
|
compatibility mode detection with edge browser vs come up with a polyfill feature to detect edge vs also find a polyfill to detect vs besides pointerevents that doesn t require a custom build of modernizr
| 0
|
76,386
| 26,402,963,487
|
IssuesEvent
|
2023-01-13 04:06:29
|
dkfans/keeperfx
|
https://api.github.com/repos/dkfans/keeperfx
|
opened
|
Room names on the Define Keys screen are no longer visible
|
Type-Defect Priority-High
|
Go to the Define Keys screen. None of the room names show.
|
1.0
|
Room names on the Define Keys screen are no longer visible - Go to the Define Keys screen. None of the room names show.
|
non_comp
|
room names on the define keys screen are no longer visible go to the define keys screen none of the room names show
| 0
|
19,143
| 13,547,200,639
|
IssuesEvent
|
2020-09-17 03:25:31
|
tlaplus/tlaplus
|
https://api.github.com/repos/tlaplus/tlaplus
|
closed
|
P-style PlusCal templates for content assist
|
Toolbox cantfix usability
|
While on module editor, the current template does not set up user for a start right away. the current template is:
(***************************************************************************
--algorithm AlgorithmName {
}
***************************************************************************)
A more sensible template could be:
(*--algorithm AlgorithmName
end algorithm;*)
I have a patch in my fork. Please let me know how to contribute.
|
True
|
P-style PlusCal templates for content assist - While on module editor, the current template does not set up user for a start right away. the current template is:
(***************************************************************************
--algorithm AlgorithmName {
}
***************************************************************************)
A more sensible template could be:
(*--algorithm AlgorithmName
end algorithm;*)
I have a patch in my fork. Please let me know how to contribute.
|
non_comp
|
p style pluscal templates for content assist while on module editor the current template does not set up user for a start right away the current template is algorithm algorithmname a more sensible template could be algorithm algorithmname end algorithm i have a patch in my fork please let me know how to contribute
| 0
|
15,665
| 20,203,933,390
|
IssuesEvent
|
2022-02-11 18:01:50
|
safing/portmaster
|
https://api.github.com/repos/safing/portmaster
|
closed
|
Portmaster does not work properly with Malwarebytes
|
docs pending in/compatibility
|
### What worked?
Disabling Malwarebyte's Web Protection brings Portmaster to a "Trusted" rating.
### What did not work?
If you have Malwarebyte's Web Protection on, Portmaster will detect a compatibility issue, detect the network as "Insecure," and disable network traffic on the device.
### Additional information
Having the AV on would be beneficial, even if it'll just be a backup method.
**Debug-Info**: https://support.safing.io/privatebin/?5fa3d9f7e421e3be#CkgCWtNHQxrvAoDa1xyERqVhftxFCRokBAkyFBFGvtkv
|
True
|
Portmaster does not work properly with Malwarebytes - ### What worked?
Disabling Malwarebyte's Web Protection brings Portmaster to a "Trusted" rating.
### What did not work?
If you have Malwarebyte's Web Protection on, Portmaster will detect a compatibility issue, detect the network as "Insecure," and disable network traffic on the device.
### Additional information
Having the AV on would be beneficial, even if it'll just be a backup method.
**Debug-Info**: https://support.safing.io/privatebin/?5fa3d9f7e421e3be#CkgCWtNHQxrvAoDa1xyERqVhftxFCRokBAkyFBFGvtkv
|
comp
|
portmaster does not work properly with malwarebytes what worked disabling malwarebyte s web protection brings portmaster to a trusted rating what did not work if you have malwarebyte s web protection on portmaster will detect a compatibility issue detect the network as insecure and disable network traffic on the device additional information having the av on would be beneficial even if it ll just be a backup method debug info
| 1
|
45,535
| 11,694,269,843
|
IssuesEvent
|
2020-03-06 03:25:59
|
GoogleCloudPlatform/java-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/java-docs-samples
|
closed
|
dlp.snippets.InspectTests: testInspectBigQueryTable failed
|
buildcop: issue priority: p1 type: bug
|
buildID: 2a7d155eb2dfb8e1ebd3c7f5a1047e0f772aba72
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/fd431b58-df8f-44ba-9415-77a1d653f3eb), [Sponge](http://sponge2/fd431b58-df8f-44ba-9415-77a1d653f3eb)
status: failed
|
1.0
|
dlp.snippets.InspectTests: testInspectBigQueryTable failed - buildID: 2a7d155eb2dfb8e1ebd3c7f5a1047e0f772aba72
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/fd431b58-df8f-44ba-9415-77a1d653f3eb), [Sponge](http://sponge2/fd431b58-df8f-44ba-9415-77a1d653f3eb)
status: failed
|
non_comp
|
dlp snippets inspecttests testinspectbigquerytable failed buildid buildurl status failed
| 0
|
422,758
| 28,479,608,049
|
IssuesEvent
|
2023-04-18 00:44:38
|
nexodus-io/nexodus
|
https://api.github.com/repos/nexodus-io/nexodus
|
closed
|
SaaS Preview Documentation
|
documentation enhancement
|
### Describe the Problem Statement
Launching the SaaS preview deployment needs documentation that is focused on exactly what users need to know to get going with this instance. It should cover creating an account, enrolling devices, and suggestions on how to test connectivity.
We have a lot of general docs that are relevant, but I envision a quickstart guide focused on this prod environment. There are different options for where docs can go:
- quickstart or other general docs in the existing docs/ directory
- the quickstart docs that are on the admin UI dashboard. We could consider allowing this page to be customized per deployment.
- https://nexodus.io/try.html ... though if we start adding more detail, it would be easier to move that page over to https://docs.nexodus.io
### Describe the Enhancement
_No response_
### Alternate Solutions
_No response_
### Additional context
_No response_
|
1.0
|
SaaS Preview Documentation - ### Describe the Problem Statement
Launching the SaaS preview deployment needs documentation that is focused on exactly what users need to know to get going with this instance. It should cover creating an account, enrolling devices, and suggestions on how to test connectivity.
We have a lot of general docs that are relevant, but I envision a quickstart guide focused on this prod environment. There are different options for where docs can go:
- quickstart or other general docs in the existing docs/ directory
- the quickstart docs that are on the admin UI dashboard. We could consider allowing this page to be customized per deployment.
- https://nexodus.io/try.html ... though if we start adding more detail, it would be easier to move that page over to https://docs.nexodus.io
### Describe the Enhancement
_No response_
### Alternate Solutions
_No response_
### Additional context
_No response_
|
non_comp
|
saas preview documentation describe the problem statement launching the saas preview deployment needs documentation that is focused on exactly what users need to know to get going with this instance it should cover creating an account enrolling devices and suggestions on how to test connectivity we have a lot of general docs that are relevant but i envision a quickstart guide focused on this prod environment there are different options for where docs can go quickstart or other general docs in the existing docs directory the quickstart docs that are on the admin ui dashboard we could consider allowing this page to be customized per deployment though if we start adding more detail it would be easier to move that page over to describe the enhancement no response alternate solutions no response additional context no response
| 0
|
35,388
| 7,947,374,013
|
IssuesEvent
|
2018-07-11 02:23:30
|
stan-dev/stan
|
https://api.github.com/repos/stan-dev/stan
|
closed
|
convert manual to bookdown format
|
code cleanup documentation feature
|
#### Summary:
Subject says it all. Will probably also break it into a few smaller documents.
#### Description:
Bookdown will let us combine markdown, R, and LaTeX to generate a manual in HTML and LaTeX form.
The LaTeX version won't look as nice as the current one. But it'll be even easier for users to provide changes to.
#### Details:
The current <i>Stan Modeling Language Users Guide and Reference Manual</i> is coded using LaTeX and distributed as a pdf.
The new documentation will be coded using Bookdown and distributed as both pdf and hosted on the mc-stan.org web site. The current reference manual will be broken down into the following pieces:
### Language Specification
```
(Modeling Language)
2. Encodings, Includes, and Comments
3. Data Types and Variable Declarations
4. Expressions
5. Statements
6. Program Blocks
7. User-Defined Functions
8. Execution of a Stan Program
35. Transformations of Constrained Variables
C. Modeling Language Syntax
D. Warning and Error Messages (language-related, only)
E. Deprecated Features
```
### Function Library
```
(Built-In Functions)
(Discrete Distributions)
(Continuous Distributions)
E. Deprecated Features (lib functions only)
F. Mathematical Functions
```
### Programming Guide
```
1. Overview
(Example Models)
9. Regression Models
10. Time-Series Models
11. Missing Data & Partially Known Parameters
12. Truncated or Censored Data
13. Finite Mixtures
14. Measurement Error and Meta-Analysis
15. Latent Discrete Parameters
16. Sparse and Ragged Data Structures
17. Clustering Models
18. Gaussian Processes 246
19. Directions, Rotations, and Hyperspheres
20. Solving Algebraic Equations
21. Solving Differential Equations
(Programming Techniques)
22. Reparameterization & Change of Variables 285
23. Custom Probability Functions 296
24. User-Defined Functions 298
25. Problematic Posteriors 309
26. Matrices, Vectors, and Arrays 323
27. Multiple Indexing and Range Indexing 329
28. Optimizing Stan Code for Efficiency
65. Model Building as Software Development
69. Stan Program Style Guide
B. Stan for Users of BUGS
```
### Algorithm Guide
```
(Inference)
29. Bayesian Data Analysis
30. Markov Chain Monte Carlo Sampling
31. Penalized Maximum Likelihood Point Estimation
32. Bayesian Point Estimation
33. Variational Inference
(Algorithms & Implementations)
34. Hamiltonian Monte Carlo Sampling
35. Transformations of Constrained Variables
36. Optimization Algorithms
37. Variational Inference
38. Diagnostic Mode
D. Warning and Error Messages (for algorithms only)
```
### Relocate to Wiki
```
66. Software Development Lifecycle
67. Reproducibility
```
### Remove
```
Preface
Acknowledgements
68. Contributed Modules
A. Licensing
```
#### Current Version:
v2.17.0
|
1.0
|
convert manual to bookdown format - #### Summary:
Subject says it all. Will probably also break it into a few smaller documents.
#### Description:
Bookdown will let us combine markdown, R, and LaTeX to generate a manual in HTML and LaTeX form.
The LaTeX version won't look as nice as the current one. But it'll be even easier for users to provide changes to.
#### Details:
The current <i>Stan Modeling Language Users Guide and Reference Manual</i> is coded using LaTeX and distributed as a pdf.
The new documentation will be coded using Bookdown and distributed as both pdf and hosted on the mc-stan.org web site. The current reference manual will be broken down into the following pieces:
### Language Specification
```
(Modeling Language)
2. Encodings, Includes, and Comments
3. Data Types and Variable Declarations
4. Expressions
5. Statements
6. Program Blocks
7. User-Defined Functions
8. Execution of a Stan Program
35. Transformations of Constrained Variables
C. Modeling Language Syntax
D. Warning and Error Messages (language-related, only)
E. Deprecated Features
```
### Function Library
```
(Built-In Functions)
(Discrete Distributions)
(Continuous Distributions)
E. Deprecated Features (lib functions only)
F. Mathematical Functions
```
### Programming Guide
```
1. Overview
(Example Models)
9. Regression Models
10. Time-Series Models
11. Missing Data & Partially Known Parameters
12. Truncated or Censored Data
13. Finite Mixtures
14. Measurement Error and Meta-Analysis
15. Latent Discrete Parameters
16. Sparse and Ragged Data Structures
17. Clustering Models
18. Gaussian Processes 246
19. Directions, Rotations, and Hyperspheres
20. Solving Algebraic Equations
21. Solving Differential Equations
(Programming Techniques)
22. Reparameterization & Change of Variables 285
23. Custom Probability Functions 296
24. User-Defined Functions 298
25. Problematic Posteriors 309
26. Matrices, Vectors, and Arrays 323
27. Multiple Indexing and Range Indexing 329
28. Optimizing Stan Code for Efficiency
65. Model Building as Software Development
69. Stan Program Style Guide
B. Stan for Users of BUGS
```
### Algorithm Guide
```
(Inference)
29. Bayesian Data Analysis
30. Markov Chain Monte Carlo Sampling
31. Penalized Maximum Likelihood Point Estimation
32. Bayesian Point Estimation
33. Variational Inference
(Algorithms & Implementations)
34. Hamiltonian Monte Carlo Sampling
35. Transformations of Constrained Variables
36. Optimization Algorithms
37. Variational Inference
38. Diagnostic Mode
D. Warning and Error Messages (for algorithms only)
```
### Relocate to Wiki
```
66. Software Development Lifecycle
67. Reproducibility
```
### Remove
```
Preface
Acknowledgements
68. Contributed Modules
A. Licensing
```
#### Current Version:
v2.17.0
|
non_comp
|
convert manual to bookdown format summary subject says it all will probably also break it into a few smaller documents description bookdown will let us combine markdown r and latex to generate a manual in html and latex form the latex version won t look as nice as the current one but it ll be even easier for users to provide changes to details the current stan modeling language users guide and reference manual is coded using latex and distributed as a pdf the new documentation will be coded using bookdown and distributed as both pdf and hosted on the mc stan org web site the current reference manual will be broken down into the following pieces language specification modeling language encodings includes and comments data types and variable declarations expressions statements program blocks user defined functions execution of a stan program transformations of constrained variables c modeling language syntax d warning and error messages language related only e deprecated features function library built in functions discrete distributions continuous distributions e deprecated features lib functions only f mathematical functions programming guide overview example models regression models time series models missing data partially known parameters truncated or censored data finite mixtures measurement error and meta analysis latent discrete parameters sparse and ragged data structures clustering models gaussian processes directions rotations and hyperspheres solving algebraic equations solving differential equations programming techniques reparameterization change of variables custom probability functions user defined functions problematic posteriors matrices vectors and arrays multiple indexing and range indexing optimizing stan code for efficiency model building as software development stan program style guide b stan for users of bugs algorithm guide inference bayesian data analysis markov chain monte carlo sampling penalized maximum likelihood point estimation bayesian point estimation variational inference algorithms implementations hamiltonian monte carlo sampling transformations of constrained variables optimization algorithms variational inference diagnostic mode d warning and error messages for algorithms only relocate to wiki software development lifecycle reproducibility remove preface acknowledgements contributed modules a licensing current version
| 0
|
25,742
| 11,209,247,844
|
IssuesEvent
|
2020-01-06 10:02:33
|
ChenLuigi/TestingPOM
|
https://api.github.com/repos/ChenLuigi/TestingPOM
|
opened
|
CVE-2019-17563 (Medium) detected in tomcat-catalina-7.0.42.jar
|
security vulnerability
|
## CVE-2019-17563 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-catalina-7.0.42.jar</b></p></summary>
<p>Tomcat Servlet Engine Core Classes and Standard implementations</p>
<p>Path to dependency file: /tmp/ws-scm/TestingPOM/pom.xml</p>
<p>Path to vulnerable library: downloadResource_a31cc5ed-f476-43dd-bb84-264eea4eb35e/20200106100154/tomcat-catalina-7.0.42.jar</p>
<p>
Dependency Hierarchy:
- :x: **tomcat-catalina-7.0.42.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ChenLuigi/TestingPOM/commit/0e898c390632d59745b4b2e312e1ba213c095c21">0e898c390632d59745b4b2e312e1ba213c095c21</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When using FORM authentication with Apache Tomcat 9.0.0.M1 to 9.0.29, 8.5.0 to 8.5.49 and 7.0.0 to 7.0.98 there was a narrow window where an attacker could perform a session fixation attack. The window was considered too narrow for an exploit to be practical but, erring on the side of caution, this issue has been treated as a security vulnerability.
<p>Publish Date: 2019-12-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17563>CVE-2019-17563</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17563">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17563</a></p>
<p>Release Date: 2019-12-23</p>
<p>Fix Resolution: org.apache.tomcat:tomcat-catalina:1.0.99,org.apache.tomcat:tomcat-catalina:8.5.50,org.apache.tomcat:tomcat-catalina:9.0.30</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-17563 (Medium) detected in tomcat-catalina-7.0.42.jar - ## CVE-2019-17563 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-catalina-7.0.42.jar</b></p></summary>
<p>Tomcat Servlet Engine Core Classes and Standard implementations</p>
<p>Path to dependency file: /tmp/ws-scm/TestingPOM/pom.xml</p>
<p>Path to vulnerable library: downloadResource_a31cc5ed-f476-43dd-bb84-264eea4eb35e/20200106100154/tomcat-catalina-7.0.42.jar</p>
<p>
Dependency Hierarchy:
- :x: **tomcat-catalina-7.0.42.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ChenLuigi/TestingPOM/commit/0e898c390632d59745b4b2e312e1ba213c095c21">0e898c390632d59745b4b2e312e1ba213c095c21</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When using FORM authentication with Apache Tomcat 9.0.0.M1 to 9.0.29, 8.5.0 to 8.5.49 and 7.0.0 to 7.0.98 there was a narrow window where an attacker could perform a session fixation attack. The window was considered too narrow for an exploit to be practical but, erring on the side of caution, this issue has been treated as a security vulnerability.
<p>Publish Date: 2019-12-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17563>CVE-2019-17563</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17563">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17563</a></p>
<p>Release Date: 2019-12-23</p>
<p>Fix Resolution: org.apache.tomcat:tomcat-catalina:1.0.99,org.apache.tomcat:tomcat-catalina:8.5.50,org.apache.tomcat:tomcat-catalina:9.0.30</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_comp
|
cve medium detected in tomcat catalina jar cve medium severity vulnerability vulnerable library tomcat catalina jar tomcat servlet engine core classes and standard implementations path to dependency file tmp ws scm testingpom pom xml path to vulnerable library downloadresource tomcat catalina jar dependency hierarchy x tomcat catalina jar vulnerable library found in head commit a href vulnerability details when using form authentication with apache tomcat to to and to there was a narrow window where an attacker could perform a session fixation attack the window was considered too narrow for an exploit to be practical but erring on the side of caution this issue has been treated as a security vulnerability publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution org apache tomcat tomcat catalina org apache tomcat tomcat catalina org apache tomcat tomcat catalina step up your open source security game with whitesource
| 0
|
4,934
| 7,546,288,667
|
IssuesEvent
|
2018-04-18 02:04:22
|
rootsongjc/kubernetes-vagrant-centos-cluster
|
https://api.github.com/repos/rootsongjc/kubernetes-vagrant-centos-cluster
|
closed
|
Dashboard cannot access
|
compatibility
|
This is similar to https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster/issues/6, but slightly different: the Dashboard pod does not start at all.
```
[root@node1 hack]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6558b6549d-zs9dt 1/1 Running 0 32m
kube-system kubernetes-dashboard-f95796d57-g6m9g 0/1 CrashLoopBackOff 11 32m
kube-system traefik-ingress-controller-kzqvt 1/1 Running 0 32m
[root@node1 hack]# kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.254.0.1 <none> 443/TCP 4d
kube-system kube-dns ClusterIP 10.254.0.2 <none> 53/UDP,53/TCP 34m
kube-system kubernetes-dashboard ClusterIP 10.254.113.186 <none> 8443/TCP 34m
kube-system traefik-ingress-service ClusterIP 10.254.167.251 <none> 80/TCP,8080/TCP 34m
[root@node1 hack]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/jimmysong/kubernetes-dashboard-amd64 v1.8.2 c87ea0497294 3 months ago 102 MB
docker.io/jimmysong/pause-amd64 3.0 99e59f495ffa 23 months ago 747 kB
[root@node1 hack]# docker run -it docker.io/jimmysong/kubernetes-dashboard-amd64:v1.8.2 bash
2018/04/16 17:13:43 Starting overwatch
2018/04/16 17:13:43 Using in-cluster config to connect to apiserver
2018/04/16 17:13:43 Could not init in cluster config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
2018/04/16 17:13:43 Using random key for csrf signing
2018/04/16 17:13:43 No request provided. Skipping authorization
panic: Could not create client config. Check logs for more information
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initInsecureClient(0xc4201632c0)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/client/manager.go:335 +0x9a
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc4201632c0)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/client/manager.go:297 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/client/manager.go:365 +0x84
main.main()
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:91 +0x13b
```
By checking the docker containers I found that the dashboard container was not running; I tried to start the dashboard container manually and got the error above 👆.
```
[root@node1 hack]# systemctl status kube-apiserver -a
● kube-apiserver.service - Kubernetes API Service
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-04-16 23:06:31 CST; 2h 13min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 1217 (kube-apiserver)
Memory: 2.6M
CGroup: /system.slice/kube-apiserver.service
└─1217 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://172.17.8.101:2379,http://172.17.8.102:2379,http://172.17.8.103:2379 --advertise-addres...
Apr 17 01:01:43 node1 kube-apiserver[1217]: I0417 01:01:43.675007 1217 logs.go:49] http: TLS handshake error from 172.17.8.1:62897: EOF
Apr 17 01:01:43 node1 kube-apiserver[1217]: I0417 01:01:43.807961 1217 logs.go:49] http2: server: error reading preface from client 172.17.8.1:62898: read tcp 172.17.8.101:6443->172.17.8.1:62898: read: connection reset by peer
Apr 17 01:02:22 node1 kube-apiserver[1217]: I0417 01:02:22.100809 1217 logs.go:49] http: TLS handshake error from 172.17.8.101:56790: remote error: tls: unknown certificate authority
Apr 17 01:05:23 node1 kube-apiserver[1217]: I0417 01:05:23.383208 1217 trace.go:76] Trace[932644373]: "List /api/v1/componentstatuses" (started: 2018-04-17 01:05:20.376024136 +0800 CST m=+7136.859394663) (total time: 3.007151421s):
Apr 17 01:05:23 node1 kube-apiserver[1217]: Trace[932644373]: [3.007009003s] [3.006957743s] Listing from storage done
Apr 17 01:08:11 node1 kube-apiserver[1217]: I0417 01:08:11.716342 1217 logs.go:49] http: TLS handshake error from 172.17.8.1:62985: EOF
Apr 17 01:08:51 node1 kube-apiserver[1217]: I0417 01:08:51.101732 1217 logs.go:49] http2: received GOAWAY [FrameHeader GOAWAY len=33], starting graceful shutdown
Apr 17 01:16:56 node1 kube-apiserver[1217]: I0417 01:16:56.548936 1217 logs.go:49] http: TLS handshake error from 172.17.8.1:63062: EOF
Apr 17 01:16:58 node1 kube-apiserver[1217]: I0417 01:16:58.937833 1217 logs.go:49] http: TLS handshake error from 172.17.8.1:63063: EOF
Apr 17 01:16:59 node1 kube-apiserver[1217]: I0417 01:16:59.109004 1217 logs.go:49] http2: server: error reading preface from client 172.17.8.1:63064: read tcp 172.17.8.101:6443->172.17.8.1:63064: read: connection reset by peer
```
|
True
|
Dashboard cannot access - This is similar to https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster/issues/6, but slightly different: the Dashboard pod does not start at all.
```
[root@node1 hack]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6558b6549d-zs9dt 1/1 Running 0 32m
kube-system kubernetes-dashboard-f95796d57-g6m9g 0/1 CrashLoopBackOff 11 32m
kube-system traefik-ingress-controller-kzqvt 1/1 Running 0 32m
[root@node1 hack]# kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.254.0.1 <none> 443/TCP 4d
kube-system kube-dns ClusterIP 10.254.0.2 <none> 53/UDP,53/TCP 34m
kube-system kubernetes-dashboard ClusterIP 10.254.113.186 <none> 8443/TCP 34m
kube-system traefik-ingress-service ClusterIP 10.254.167.251 <none> 80/TCP,8080/TCP 34m
[root@node1 hack]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/jimmysong/kubernetes-dashboard-amd64 v1.8.2 c87ea0497294 3 months ago 102 MB
docker.io/jimmysong/pause-amd64 3.0 99e59f495ffa 23 months ago 747 kB
[root@node1 hack]# docker run -it docker.io/jimmysong/kubernetes-dashboard-amd64:v1.8.2 bash
2018/04/16 17:13:43 Starting overwatch
2018/04/16 17:13:43 Using in-cluster config to connect to apiserver
2018/04/16 17:13:43 Could not init in cluster config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
2018/04/16 17:13:43 Using random key for csrf signing
2018/04/16 17:13:43 No request provided. Skipping authorization
panic: Could not create client config. Check logs for more information
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initInsecureClient(0xc4201632c0)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/client/manager.go:335 +0x9a
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc4201632c0)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/client/manager.go:297 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/client/manager.go:365 +0x84
main.main()
/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:91 +0x13b
```
By checking the docker containers I found that the dashboard container was not running; I tried to start the dashboard container manually and got the error above 👆.
```
[root@node1 hack]# systemctl status kube-apiserver -a
● kube-apiserver.service - Kubernetes API Service
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-04-16 23:06:31 CST; 2h 13min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 1217 (kube-apiserver)
Memory: 2.6M
CGroup: /system.slice/kube-apiserver.service
└─1217 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://172.17.8.101:2379,http://172.17.8.102:2379,http://172.17.8.103:2379 --advertise-addres...
Apr 17 01:01:43 node1 kube-apiserver[1217]: I0417 01:01:43.675007 1217 logs.go:49] http: TLS handshake error from 172.17.8.1:62897: EOF
Apr 17 01:01:43 node1 kube-apiserver[1217]: I0417 01:01:43.807961 1217 logs.go:49] http2: server: error reading preface from client 172.17.8.1:62898: read tcp 172.17.8.101:6443->172.17.8.1:62898: read: connection reset by peer
Apr 17 01:02:22 node1 kube-apiserver[1217]: I0417 01:02:22.100809 1217 logs.go:49] http: TLS handshake error from 172.17.8.101:56790: remote error: tls: unknown certificate authority
Apr 17 01:05:23 node1 kube-apiserver[1217]: I0417 01:05:23.383208 1217 trace.go:76] Trace[932644373]: "List /api/v1/componentstatuses" (started: 2018-04-17 01:05:20.376024136 +0800 CST m=+7136.859394663) (total time: 3.007151421s):
Apr 17 01:05:23 node1 kube-apiserver[1217]: Trace[932644373]: [3.007009003s] [3.006957743s] Listing from storage done
Apr 17 01:08:11 node1 kube-apiserver[1217]: I0417 01:08:11.716342 1217 logs.go:49] http: TLS handshake error from 172.17.8.1:62985: EOF
Apr 17 01:08:51 node1 kube-apiserver[1217]: I0417 01:08:51.101732 1217 logs.go:49] http2: received GOAWAY [FrameHeader GOAWAY len=33], starting graceful shutdown
Apr 17 01:16:56 node1 kube-apiserver[1217]: I0417 01:16:56.548936 1217 logs.go:49] http: TLS handshake error from 172.17.8.1:63062: EOF
Apr 17 01:16:58 node1 kube-apiserver[1217]: I0417 01:16:58.937833 1217 logs.go:49] http: TLS handshake error from 172.17.8.1:63063: EOF
Apr 17 01:16:59 node1 kube-apiserver[1217]: I0417 01:16:59.109004 1217 logs.go:49] http2: server: error reading preface from client 172.17.8.1:63064: read tcp 172.17.8.101:6443->172.17.8.1:63064: read: connection reset by peer
```
|
comp
|
dashboard cannot access similar to this but slightly different the dashboard pod does not start at all kubectl get pods all namespaces namespace name ready status restarts age kube system coredns running kube system kubernetes dashboard crashloopbackoff kube system traefik ingress controller kzqvt running kubectl get svc all namespaces namespace name type cluster ip external ip port s age default kubernetes clusterip tcp kube system kube dns clusterip udp tcp kube system kubernetes dashboard clusterip tcp kube system traefik ingress service clusterip tcp tcp docker images repository tag image id created size docker io jimmysong kubernetes dashboard months ago mb docker io jimmysong pause months ago kb docker run it docker io jimmysong kubernetes dashboard bash starting overwatch using in cluster config to connect to apiserver could not init in cluster config unable to load in cluster configuration kubernetes service host and kubernetes service port must be defined using random key for csrf signing no request provided skipping authorization panic could not create client config check logs for more information goroutine github com kubernetes dashboard src app backend client clientmanager initinsecureclient home travis build kubernetes dashboard tmp backend src github com kubernetes dashboard src app backend client manager go github com kubernetes dashboard src app backend client clientmanager init home travis build kubernetes dashboard tmp backend src github com kubernetes dashboard src app backend client manager go github com kubernetes dashboard src app backend client newclientmanager home travis build kubernetes dashboard tmp backend src github com kubernetes dashboard src app backend client manager go main main home travis build kubernetes dashboard tmp backend src github com kubernetes dashboard src app backend dashboard go by checking docker containers i found the dashboard container was not up tried to start the dashboard container manually and got the error above systemctl status kube apiserver a ● kube apiserver service kubernetes api service loaded loaded usr lib systemd system kube apiserver service enabled vendor preset disabled active active running since mon cst ago docs main pid kube apiserver memory cgroup system slice kube apiserver service └─ usr bin kube apiserver logtostderr true v etcd servers advertise addres apr kube apiserver logs go http tls handshake error from eof apr kube apiserver logs go server error reading preface from client read tcp read connection reset by peer apr kube apiserver logs go http tls handshake error from remote error tls unknown certificate authority apr kube apiserver trace go trace list api componentstatuses started cst m total time apr kube apiserver trace listing from storage done apr kube apiserver logs go http tls handshake error from eof apr kube apiserver logs go received goaway starting graceful shutdown apr kube apiserver logs go http tls handshake error from eof apr kube apiserver logs go http tls handshake error from eof apr kube apiserver logs go server error reading preface from client read tcp read connection reset by peer
| 1
|
8,305
| 10,335,566,256
|
IssuesEvent
|
2019-09-03 10:54:13
|
AdguardTeam/AdguardForWindows
|
https://api.github.com/repos/AdguardTeam/AdguardForWindows
|
closed
|
SetupVPN compatibility issue.
|
P4: Low bug compatibility
|
### Steps to reproduce
1. Install SetupVPN extension: https://chrome.google.com/webstore/detail/setupvpn-lifetime-free-vp/oofgbpoabipfcfjapgnbbjjaenockbdp
2. Login with email adguard@vomoto.com and password adguard
### Expected behavior
The extension should find a list of servers to connect.
### Actual behavior
The extension keeps trying to find a server forever.
### Customer ID
2170400
### Your environment
Chrome Canary 77.0.3823.0
Windows 10 v1903 build 18912.1001
|
True
|
SetupVPN compatibility issue. - ### Steps to reproduce
1. Install SetupVPN extension: https://chrome.google.com/webstore/detail/setupvpn-lifetime-free-vp/oofgbpoabipfcfjapgnbbjjaenockbdp
2. Login with email adguard@vomoto.com and password adguard
### Expected behavior
The extension should find a list of servers to connect.
### Actual behavior
The extension keeps trying to find a server forever.
### Customer ID
2170400
### Your environment
Chrome Canary 77.0.3823.0
Windows 10 v1903 build 18912.1001
|
comp
|
setupvpn compatibility issue steps to reproduce install setupvpn extension login with email adguard vomoto com and password adguard expected behavior the extension should find a list of servers to connect actual behavior the extension keeps trying to find a server forever customer id your environment chrome canary windows build
| 1
|
2,825
| 5,627,694,112
|
IssuesEvent
|
2017-04-05 02:40:27
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
HttpClient/HttpContent application deadlock
|
area-System.Net.Http bug needs more info tenet-compatibility
|
Platform: Ubuntu 16.04 and Windows
Core CLR: 1.1.0
In my application (An MVC based micro-service) I make repeating, large POST requests to a local service. The response of each request is several megabytes of JSON. I am using tasks to execute these request in parallel, my control flow is completly async. After a variable number (usually not the same number) of requests the application will deadlock becoming completely unresponsive to outside requests. The number of requests can be variable, but the time before deadlock changes when the size of the requested parameters is changed. The larger the requests, the the quicker the deadlock. The HttpClient object is shared among the requests.
I have isolated the failure to:
await hrm.Content.CopyToAsync(ms).ConfigureAwait(false);
Where Hrm is the HttpResponse message allocated by:
HttpResponseMessage hrm = await hc.SendAsync(hqm, HttpCompletionOption.ResponseHeadersRead).ConfigureAwait(false)
After spending quite a bit of time, I have reverted the projects' CLR version to 1.0.3 from 1.1.0 and it appears that the deadlock is no longer present - or at least takes much more time to manifest itself.
Since I am developing on Linux I have been unable to do any useful debugging of this issue after the deadlock - I am not able to connect to the process after it deadlocks. I cannot share my source directly but I could privately.
|
True
|
HttpClient/HttpContent application deadlock - Platform: Ubuntu 16.04 and Windows
Core CLR: 1.1.0
In my application (An MVC based micro-service) I make repeating, large POST requests to a local service. The response of each request is several megabytes of JSON. I am using tasks to execute these request in parallel, my control flow is completly async. After a variable number (usually not the same number) of requests the application will deadlock becoming completely unresponsive to outside requests. The number of requests can be variable, but the time before deadlock changes when the size of the requested parameters is changed. The larger the requests, the the quicker the deadlock. The HttpClient object is shared among the requests.
I have isolated the failure to:
await hrm.Content.CopyToAsync(ms).ConfigureAwait(false);
Where Hrm is the HttpResponse message allocated by:
HttpResponseMessage hrm = await hc.SendAsync(hqm, HttpCompletionOption.ResponseHeadersRead).ConfigureAwait(false)
After spending quite a bit of time, I have reverted the projects' CLR version to 1.0.3 from 1.1.0 and it appears that the deadlock is no longer present - or at least takes much more time to manifest itself.
Since I am developing on Linux I have been unable to do any useful debugging of this issue after the deadlock - I am not able to connect to the process after it deadlocks. I cannot share my source directly but I could privately.
|
comp
|
httpclient httpcontent application deadlock platform ubuntu and windows core clr in my application an mvc based micro service i make repeating large post requests to a local service the response of each request is several megabytes of json i am using tasks to execute these request in parallel my control flow is completly async after a variable number usually not the same number of requests the application will deadlock becoming completely unresponsive to outside requests the number of requests can be variable but the time before deadlock changes when the size of the requested parameters is changed the larger the requests the the quicker the deadlock the httpclient object is shared among the requests i have isolated the failure to await hrm content copytoasync ms configureawait false where hrm is the httpresponse message allocated by httpresponsemessage hrm await hc sendasync hqm httpcompletionoption responseheadersread configureawait false after spending quite a bit of time i have reverted the projects clr version to from and it appears that the deadlock is no longer present or at least takes much more time to manifest itself since i am developing on linux i have been unable to do any useful debugging of this issue after the deadlock i am not able to connect to the process after it deadlocks i cannot share my source directly but i could privately
| 1
|
3,156
| 6,078,191,910
|
IssuesEvent
|
2017-06-16 07:29:38
|
sass/sass
|
https://api.github.com/repos/sass/sass
|
closed
|
Deprecate old-style property syntax
|
Dart Sass Compatibility Requires Deprecation
|
The old property syntax looks janky and we don't intend to support it in Dart Sass, so we should deprecate it here.
Tasks:
- [x] Deprecate existing behavior in `stable`.
- [x] Remove behavior from `master`.
|
True
|
Deprecate old-style property syntax - The old property syntax looks janky and we don't intend to support it in Dart Sass, so we should deprecate it here.
Tasks:
- [x] Deprecate existing behavior in `stable`.
- [x] Remove behavior from `master`.
|
comp
|
deprecate old style property syntax the old property syntax looks janky and we don t intend to support it in dart sass so we should deprecate it here tasks deprecate existing behavior in stable remove behavior from master
| 1
|
14,124
| 16,986,952,699
|
IssuesEvent
|
2021-06-30 15:22:06
|
ckeditor/ckeditor5
|
https://api.github.com/repos/ckeditor/ckeditor5
|
closed
|
Find and replace feature
|
Epic domain:v4-compatibility package:find-and-replace squad:green status:discussion support:2 type:feature
|
🆕 Feature request
## CKEditor 5
as the title says ..
I googled around and looked in the docs, but nothing on document search
if not how can I / we make one?
Thanks!
---
If you'd like this feature to be introduced add 👍 to this ticket.
|
True
|
Find and replace feature - 🆕 Feature request
## CKEditor 5
as the title says ..
I googled around and looked in the docs, but nothing on document search
if not how can I / we make one?
Thanks!
---
If you'd like this feature to be introduced add 👍 to this ticket.
|
comp
|
find and replace feature 🆕 feature request ckeditor as the title says i googled around and looked in the docs but nothing on document search if not how can i we make one thanks if you d like this feature to be introduced add 👍 to this ticket
| 1
|
14,733
| 18,094,908,094
|
IssuesEvent
|
2021-09-22 07:58:47
|
AdguardTeam/AdguardForWindows
|
https://api.github.com/repos/AdguardTeam/AdguardForWindows
|
closed
|
Conflict with Sophos Endpoint Security and Control
|
bug compatibility confirmed Resolution: Fixed Status: Resolved Version: AdGuard v7.7
|
### Steps to reproduce
1. Install Sophos Endpoint Security and Control
2. Add `##body` for test element hiding or enable legacy Assistant
3. Open any site.
### Expected behavior
Blank page when `##body` enabled
### Actual behavior
AdGuard injections are not working if content filtering enabled in Sophos Endpoint Security and Control 10.8(28928), so element hiding, JS end extensions are not working. Enabling `Use redirect mode` does not help.
### Your environment
<!--- Please include all relevant details about the environment you experienced the bug in -->
* Environment name and version: (e.g. Chrome 59) latest Chrome
* Operating system and version: (e.g. Windows 10, v.1703, build 15063.413) Windows 10
* Antivirus: Sophos Endpoint Security and Control 10.8(28928)
|
True
|
Conflict with Sophos Endpoint Security and Control - ### Steps to reproduce
1. Install Sophos Endpoint Security and Control
2. Add `##body` for test element hiding or enable legacy Assistant
3. Open any site.
### Expected behavior
Blank page when `##body` enabled
### Actual behavior
AdGuard injections are not working if content filtering enabled in Sophos Endpoint Security and Control 10.8(28928), so element hiding, JS end extensions are not working. Enabling `Use redirect mode` does not help.
### Your environment
<!--- Please include all relevant details about the environment you experienced the bug in -->
* Environment name and version: (e.g. Chrome 59) latest Chrome
* Operating system and version: (e.g. Windows 10, v.1703, build 15063.413) Windows 10
* Antivirus: Sophos Endpoint Security and Control 10.8(28928)
|
comp
|
conflict with sophos endpoint security and control steps to reproduce install sophos endpoint security and control add body for test element hiding or enable legacy assistant open any site expected behavior blank page when body enabled actual behavior adguard injections are not working if content filtering enabled in sophos endpoint security and control so element hiding js end extensions are not working enabling use redirect mode does not help your environment environment name and version e g chrome latest chrome operating system and version e g windows v build windows antivirus sophos endpoint security and control
| 1
|
85,022
| 3,683,872,258
|
IssuesEvent
|
2016-02-24 15:33:50
|
cfpb/cfgov-refresh
|
https://api.github.com/repos/cfpb/cfgov-refresh
|
closed
|
Every phone number in contact info appears to be a fax
|
BEWD priority: high
|
In the Contact Info molecule, all the molecules with phone numbers appear to be faxes even when they aren't.
## Steps to replicate
1. Create a sublanding page
2. Create a Contact Info Molecule in the sidebar
3. Select a contact with phone numbers, such as "Submit a complaint" or "Reasonable Accommodation Requests"
Everything will appear under fax. If you move the loop outside of the label, every label will be "Fax", even when no fax numbers are included in the contact.
## Preview
** Result **


**Expected fields**


|
1.0
|
Every phone number in contact info appears to be a fax - In the Contact Info molecule, all the molecules with phone numbers appear to be faxes even when they aren't.
## Steps to replicate
1. Create a sublanding page
2. Create a Contact Info Molecule in the sidebar
3. Select a contact with phone numbers, such as "Submit a complaint" or "Reasonable Accommodation Requests"
Everything will appear under fax. If you move the loop outside of the label, every label will be "Fax", even when no fax numbers are included in the contact.
## Preview
** Result **


**Expected fields**


|
non_comp
|
every phone number in contact info appears to be a fax in the contact info molecule all the molecules with phone numbers appear to be faxes even when they aren t steps to replicate create a sublanding page create a contact info molecule in the sidebar select a contact with phone numbers such as submit a complaint or reasonable accommodation requests everything will appear under fax if you move the loop outside of the label every label will be fax even when no fax numbers are included in the contact preview result expected fields
| 0
|
201,276
| 22,948,155,372
|
IssuesEvent
|
2022-07-19 03:38:42
|
elikkatzgit/TestingPOM
|
https://api.github.com/repos/elikkatzgit/TestingPOM
|
reopened
|
CVE-2018-14720 (High) detected in jackson-databind-2.7.2.jar
|
security vulnerability
|
## CVE-2018-14720 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.7.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.7.2.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.7 might allow attackers to conduct external XML entity (XXE) attacks by leveraging failure to block unspecified JDK classes from polymorphic deserialization.
<p>Publish Date: 2019-01-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14720>CVE-2018-14720</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-14720">https://nvd.nist.gov/vuln/detail/CVE-2018-14720</a></p>
<p>Release Date: 2019-01-02</p>
<p>Fix Resolution: 2.7.9.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.7.2","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.7.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.9.5","isBinary":true}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-14720","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.7 might allow attackers to conduct external XML entity (XXE) attacks by leveraging failure to block unspecified JDK classes from polymorphic deserialization.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14720","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2018-14720 (High) detected in jackson-databind-2.7.2.jar - ## CVE-2018-14720 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.7.2.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.7.2.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.7 might allow attackers to conduct external XML entity (XXE) attacks by leveraging failure to block unspecified JDK classes from polymorphic deserialization.
<p>Publish Date: 2019-01-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14720>CVE-2018-14720</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-14720">https://nvd.nist.gov/vuln/detail/CVE-2018-14720</a></p>
<p>Release Date: 2019-01-02</p>
<p>Fix Resolution: 2.7.9.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.7.2","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.7.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.9.5","isBinary":true}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-14720","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.7 might allow attackers to conduct external XML entity (XXE) attacks by leveraging failure to block unspecified JDK classes from polymorphic deserialization.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-14720","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_comp
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href dependency hierarchy x jackson databind jar vulnerable library found in base branch master vulnerability details fasterxml jackson databind x before might allow attackers to conduct external xml entity xxe attacks by leveraging failure to block unspecified jdk classes from polymorphic deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion isbinary true basebranches vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before might allow attackers to conduct external xml entity xxe attacks by leveraging failure to block unspecified jdk classes from polymorphic deserialization vulnerabilityurl
| 0
|
18,844
| 26,186,997,512
|
IssuesEvent
|
2023-01-03 02:42:34
|
Armourers-Workshop/Armourers-Workshop
|
https://api.github.com/repos/Armourers-Workshop/Armourers-Workshop
|
closed
|
[BUG] Hall of Mirrors with Astral Sorcery (+BetterPortals)
|
bug compatibility investigating
|
**Describe the bug**
Looking at the horizon with a reskinned Armourer's Workshop item, even one with no fancy blocks, results in a hall of mirrors effect just over the horizon. The constellations seem to have a faint hall of mirrors effect as well. Additionally, the sun and moon will produce a hall of mirrors effect when it is raining.
**To Reproduce**
Steps to reproduce the behavior:
1. Skin an item
2. hold item in hand
3. observe the sky
4. toggle downfall
**Screenshots**


**Debug crash log**
https://pastebin.com/tHp9BBZQ
**Additional context**
Armourer's Workshop Version 0.49.1.481 was used for the test. However, the behavior was also observed on Version 0.49.1.6.
Edit: It turns out that this only occurs when BetterPortals is installed alongside both mods. A workaround is to use weak rendering in Astral Sorcery's configuration file, but you will not be able to see eclipses if that render is off.
|
True
|
[BUG] Hall of Mirrors with Astral Sorcery (+BetterPortals) - **Describe the bug**
Looking at the horizon with a reskinned Armourer's Workshop item, even one with no fancy blocks, results in a hall of mirrors effect just over the horizon. The constellations seem to have a faint hall of mirrors effect as well. Additionally, the sun and moon will produce a hall of mirrors effect when it is raining.
**To Reproduce**
Steps to reproduce the behavior:
1. Skin an item
2. hold item in hand
3. observe the sky
4. toggle downfall
**Screenshots**


**Debug crash log**
https://pastebin.com/tHp9BBZQ
**Additional context**
Armourer's Workshop Version 0.49.1.481 was used for the test. However, the behavior was also observed on Version 0.49.1.6.
Edit: It turns out that this only occurs when BetterPortals is installed alongside both mods. A workaround is to use weak rendering in Astral Sorcery's configuration file, but you will not be able to see eclipses if that render is off.
|
comp
|
hall of mirrors with astral sorcery betterportals describe the bug looking at the horizon with a reskinned armourer s workshop item even one with no fancy blocks results in a hall of mirrors effect just over the horizon the constellations seem to have a faint hall of mirrors effect as well additionally the sun and moon will produce a hall of mirrors effect when it is raining to reproduce steps to reproduce the behavior skin an item hold item in hand observe the sky toggle downfall screenshots debug crash log additional context armourer s workshop version was used for the test however the behavior was also observed on version edit it turns out that this only occurs when betterportals is installed alongside both mods a workaround is to use weak rendering in astral sorcery s configuration file but you will not be able to see eclipses if that render is off
| 1
|
11,001
| 13,038,868,652
|
IssuesEvent
|
2020-07-28 15:50:26
|
Conflux-Chain/conflux-rust
|
https://api.github.com/repos/Conflux-Chain/conflux-rust
|
closed
|
Panic of "Protocols should have been registered before the HOUSEKEEPING timeout".
|
P3 backward compatible bug
|
This was reported by one miner and on Jenkins once.
@Thegaram I remember you have encountered this before after adding some sleep time, so do you have an idea what's the root cause?
|
True
|
Panic of "Protocols should have been registered before the HOUSEKEEPING timeout". - This was reported by one miner and on Jenkins once.
@Thegaram I remember you have encountered this before after adding some sleep time, so do you have an idea what's the root cause?
|
comp
|
panic of protocols should have been registered before the housekeeping timeout this was reported by one miner and on jenkins once thegaram i remember you have encountered this before after adding some sleep time so do you have an idea what s the root cause
| 1
|
119,004
| 4,759,301,322
|
IssuesEvent
|
2016-10-24 22:04:59
|
michaeljcalkins/rangersteve-ideas
|
https://api.github.com/repos/michaeljcalkins/rangersteve-ideas
|
opened
|
In DEBUG mode display your cursor position then when you click console log the world position
|
Priority: Medium Status: Accepted Time: > Week Type: Enhancement
|
`{ x: 800, y: 400 }` then you can just paste it in the map config file
- And_re
|
1.0
|
In DEBUG mode display your cursor position then when you click console log the world position - `{ x: 800, y: 400 }` then you can just paste it in the map config file
- And_re
|
non_comp
|
in debug mode display your cursor position then when you click console log the world position x y then you can just paste it in the map config file and re
| 0
|