Unnamed: 0
int64
0
832k
id
float64
2.49B
32.1B
type
stringclasses
1 value
created_at
stringlengths
19
19
repo
stringlengths
4
112
repo_url
stringlengths
33
141
action
stringclasses
3 values
title
stringlengths
1
970
labels
stringlengths
4
625
body
stringlengths
3
247k
index
stringclasses
9 values
text_combine
stringlengths
96
247k
label
stringclasses
2 values
text
stringlengths
96
218k
binary_label
int64
0
1
28,623
13,764,327,917
IssuesEvent
2020-10-07 11:54:57
ppy/osu
https://api.github.com/repos/ppy/osu
closed
osu! freezes within a beatmap
type:performance
Hi there, I just tested out osu lazer for a bit and came across some freezes in which the whole screen freezes for a second or more in the normal osu mode. I don't know what's causing the issue, maybe something with the database (?) as I had a few errors importing my beatmaps. Windows 10 (1709, x64) Intel Core i7 6800k Readon RX Vega 64 (driver: 18.3.4) osu version: 2018.423.0 using a guest account I hope those [logs](https://github.com/ppy/osu/files/1949392/logs.zip) help to fix the bug and get one step closer to a stable osu lazer release. ;) (I just put in all logs from the whole session... Performance logging features got enabled after I first noticed the problem)
True
osu! freezes within a beatmap - Hi there, I just tested out osu lazer for a bit and came across some freezes in which the whole screen freezes for a second or more in the normal osu mode. I don't know what's causing the issue, maybe something with the database (?) as I had a few errors importing my beatmaps. Windows 10 (1709, x64) Intel Core i7 6800k Readon RX Vega 64 (driver: 18.3.4) osu version: 2018.423.0 using a guest account I hope those [logs](https://github.com/ppy/osu/files/1949392/logs.zip) help to fix the bug and get one step closer to a stable osu lazer release. ;) (I just put in all logs from the whole session... Performance logging features got enabled after I first noticed the problem)
perf
osu freezes within a beatmap hi there i just tested out osu lazer for a bit and came across some freezes in which the whole screen freezes for a second or more in the normal osu mode i don t know what s causing the issue maybe something with the database as i had a few errors importing my beatmaps windows intel core readon rx vega driver osu version using a guest account i hope those help to fix the bug and get one step closer to a stable osu lazer release i just put in all logs from the whole session performance logging features got enabled after i first noticed the problem
1
506,648
14,669,807,730
IssuesEvent
2020-12-30 02:17:03
GrapheneOS/os_issue_tracker
https://api.github.com/repos/GrapheneOS/os_issue_tracker
closed
roll back changes to vndk ABI caused by churn from compiler hardening
enhancement low-priority upstream
This seems to have been triggered by enabling -ftrivial-auto-var-init=zero. It doesn't seem like anything is actually wrong but rather a lot of non-public things leak into the C++ ABI potentially due to them not using -fvisibility=hidden.
1.0
roll back changes to vndk ABI caused by churn from compiler hardening - This seems to have been triggered by enabling -ftrivial-auto-var-init=zero. It doesn't seem like anything is actually wrong but rather a lot of non-public things leak into the C++ ABI potentially due to them not using -fvisibility=hidden.
non_perf
roll back changes to vndk abi caused by churn from compiler hardening this seems to have been triggered by enabling ftrivial auto var init zero it doesn t seem like anything is actually wrong but rather a lot of non public things leak into the c abi potentially due to them not using fvisibility hidden
0
18,258
10,053,584,003
IssuesEvent
2019-07-21 18:00:21
raoulvdberge/refinedstorage
https://api.github.com/repos/raoulvdberge/refinedstorage
closed
Server crash-loop
Performance
#### Issue description: Server goes into crash-loop #### What happens: On boot, server hangs for a bit, then the watchdog service reboots the server for ticks taking longer than 60 seconds. #### What you expected to happen: No crashing #### Steps to reproduce: 1.boot server 2.wait a minute 3. server reboots ... #### Version (make sure you are on the latest version before reporting): - Minecraft: 1.12.2 - Forge: 14.23.5.2775 (and also 14.23.5.2768 tested both) - Refined Storage: 1.6.9 Does this issue occur on a server? [yes/no] yes #### If a (crash)log is relevant for this issue, link it here: https://pastebin.com/gi4KF9iC
True
Server crash-loop - #### Issue description: Server goes into crash-loop #### What happens: On boot, server hangs for a bit, then the watchdog service reboots the server for ticks taking longer than 60 seconds. #### What you expected to happen: No crashing #### Steps to reproduce: 1.boot server 2.wait a minute 3. server reboots ... #### Version (make sure you are on the latest version before reporting): - Minecraft: 1.12.2 - Forge: 14.23.5.2775 (and also 14.23.5.2768 tested both) - Refined Storage: 1.6.9 Does this issue occur on a server? [yes/no] yes #### If a (crash)log is relevant for this issue, link it here: https://pastebin.com/gi4KF9iC
perf
server crash loop issue description server goes into crash loop what happens on boot server hangs for a bit then the watchdog service reboots the server for ticks taking longer than seconds what you expected to happen no crashing steps to reproduce boot server wait a minute server reboots version make sure you are on the latest version before reporting minecraft forge and also tested both refined storage does this issue occur on a server yes if a crash log is relevant for this issue link it here
1
9,114
6,767,111,023
IssuesEvent
2017-10-26 01:10:36
ianstormtaylor/slate
https://api.github.com/repos/ianstormtaylor/slate
closed
optimize `state.toJSON` performance by being lazier
improvement ⚑ performance
Right now when we do `state.toJSON()`, we serialize all the potential properties of the state, and then delete the ones that shouldn't be included. This is obviously the slower way, we should add in properties if the options call for it instead.
True
optimize `state.toJSON` performance by being lazier - Right now when we do `state.toJSON()`, we serialize all the potential properties of the state, and then delete the ones that shouldn't be included. This is obviously the slower way, we should add in properties if the options call for it instead.
perf
optimize state tojson performance by being lazier right now when we do state tojson we serialize all the potential properties of the state and then delete the ones that shouldn t be included this is obviously the slower way we should add in properties if the options call for it instead
1
23,453
11,966,281,074
IssuesEvent
2020-04-06 02:52:11
Quarantine-Help/quarantine-hybrid-app
https://api.github.com/repos/Quarantine-Help/quarantine-hybrid-app
opened
Modularize the app and implement SelectivePreloadingStrategy
performance refactor
Group the page into modules and consolidate routing. The proposed hierarchy needs to be updated with the latest workflow changes before being implemented. ### Proposed Module Hierarchy **OnboardModule** * Landing pages * Registration pages for Volunteer & Quarantined **MainModule** * Maps page * Request creation pages * Request handling pages **MiscModule** * My Profile pages - edit/save ## Additional Context Go through the workflow and discuss on slack https://xd.adobe.com/spec/6a2c4d00-a356-4885-5e8d-a4379323a760-1c1d/grid/ **Refer** https://ionicframework.com/docs/angular/navigation#lazy-loading-routes https://ionicframework.com/blog/how-to-lazy-load-in-ionic-angular/
True
Modularize the app and implement SelectivePreloadingStrategy - Group the page into modules and consolidate routing. The proposed hierarchy needs to be updated with the latest workflow changes before being implemented. ### Proposed Module Hierarchy **OnboardModule** * Landing pages * Registration pages for Volunteer & Quarantined **MainModule** * Maps page * Request creation pages * Request handling pages **MiscModule** * My Profile pages - edit/save ## Additional Context Go through the workflow and discuss on slack https://xd.adobe.com/spec/6a2c4d00-a356-4885-5e8d-a4379323a760-1c1d/grid/ **Refer** https://ionicframework.com/docs/angular/navigation#lazy-loading-routes https://ionicframework.com/blog/how-to-lazy-load-in-ionic-angular/
perf
modularize the app and implement selectivepreloadingstrategy group the page into modules and consolidate routing the proposed hierarchy needs to be updated with the latest workflow changes before being implemented proposed module hierarchy onboardmodule landing pages registration pages for volunteer quarantined mainmodule maps page request creation pages request handling pages miscmodule my profile pages edit save additional context go through the workflow and discuss on slack refer
1
56,357
31,884,564,610
IssuesEvent
2023-09-16 19:48:15
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
opened
Very slow in those type of calcs
tenet-performance
Net (all versions, 6, 7 and coming 8 are all very slow compared to other languages) I do not expect to be as fast as e.g. rust, but being 10x slower than slow python is a bit of a shame: https://programming-language-benchmarks.vercel.app/problem/edigits I'm not sure if this will work there, but this is a suggestion for possible optimisations. Hope that version 9 will do something with that.
True
Very slow in those type of calcs - Net (all versions, 6, 7 and coming 8 are all very slow compared to other languages) I do not expect to be as fast as e.g. rust, but being 10x slower than slow python is a bit of a shame: https://programming-language-benchmarks.vercel.app/problem/edigits I'm not sure if this will work there, but this is a suggestion for possible optimisations. Hope that version 9 will do something with that.
perf
very slow in those type of calcs net all versions and coming are all very slow compared to other languages i do not expect to be as fast as e g rust but being slower than slow python is a bit of a shame i m not sure if this will work there but this is a suggestion for possible optimisations hope that version will do something with that
1
26,568
13,055,871,729
IssuesEvent
2020-07-30 02:59:07
fused-effects/fused-effects
https://api.github.com/repos/fused-effects/fused-effects
closed
Manually-fused RWST carrier is significantly faster than StateC+ReaderC+WriterC
bug performance
Given [this RWSC carrier](https://gist.github.com/patrickt/0ea924b742bf675b3b1d47cf4091d720) and the following computation: ```haskell go :: ( Member (State Int) sig , Member (Writer String) sig , Member (Reader Bool) sig , Carrier sig m ) => Int -> m () go n = forM_ (take n (cycle names)) $ \str -> do let len = length str modify @Int (+ len) curr <- get @Int when (curr < n) $ do should <- ask let str' = if should then reverse str else str tell str' ``` interpreting it with RWSC is around twice as fast as StateC+ReaderC+WriterC: ``` benchmarked fused-effects/Separate carriers/10 time 480.1 ns (461.4 ns .. 501.3 ns) 0.980 R² (0.967 R² .. 0.990 R²) mean 515.3 ns (502.4 ns .. 537.7 ns) std dev 53.87 ns (39.49 ns .. 80.78 ns) variance introduced by outliers: 63% (severely inflated) benchmarked fused-effects/Separate carriers/100 time 4.781 μs (4.712 μs .. 4.879 μs) 0.995 R² (0.988 R² .. 0.999 R²) mean 4.767 μs (4.720 μs .. 4.844 μs) std dev 199.3 ns (136.2 ns .. 321.1 ns) variance introduced by outliers: 23% (moderately inflated) benchmarked fused-effects/Separate carriers/1000 time 50.35 μs (48.92 μs .. 51.66 μs) 0.992 R² (0.985 R² .. 0.997 R²) mean 49.53 μs (48.85 μs .. 50.44 μs) std dev 2.588 μs (1.973 μs .. 3.720 μs) variance introduced by outliers: 30% (moderately inflated) benchmarked fused-effects/RWST carrier/10 time 336.0 ns (317.5 ns .. 361.3 ns) 0.971 R² (0.951 R² .. 0.994 R²) mean 322.9 ns (317.0 ns .. 332.4 ns) std dev 25.15 ns (16.85 ns .. 36.78 ns) variance introduced by outliers: 50% (moderately inflated) benchmarked fused-effects/RWST carrier/100 time 2.664 μs (2.586 μs .. 2.723 μs) 0.993 R² (0.982 R² .. 0.999 R²) mean 2.928 μs (2.858 μs .. 3.105 μs) std dev 379.1 ns (137.0 ns .. 685.6 ns) variance introduced by outliers: 74% (severely inflated) benchmarked fused-effects/RWST carrier/1000 time 28.39 μs (28.01 μs .. 28.94 μs) 0.997 R² (0.995 R² .. 0.999 R²) mean 28.49 μs (28.30 μs .. 28.75 μs) std dev 753.7 ns (584.7 ns .. 1.040 μs) variance introduced by outliers: 11% (moderately inflated) ``` I would expect to see a small speedup using RWSC (`mtl` versions of similar code are around 10% faster with RWS). This is much too large and makes me think that we might not be getting enough fusion.
True
Manually-fused RWST carrier is significantly faster than StateC+ReaderC+WriterC - Given [this RWSC carrier](https://gist.github.com/patrickt/0ea924b742bf675b3b1d47cf4091d720) and the following computation: ```haskell go :: ( Member (State Int) sig , Member (Writer String) sig , Member (Reader Bool) sig , Carrier sig m ) => Int -> m () go n = forM_ (take n (cycle names)) $ \str -> do let len = length str modify @Int (+ len) curr <- get @Int when (curr < n) $ do should <- ask let str' = if should then reverse str else str tell str' ``` interpreting it with RWSC is around twice as fast as StateC+ReaderC+WriterC: ``` benchmarked fused-effects/Separate carriers/10 time 480.1 ns (461.4 ns .. 501.3 ns) 0.980 R² (0.967 R² .. 0.990 R²) mean 515.3 ns (502.4 ns .. 537.7 ns) std dev 53.87 ns (39.49 ns .. 80.78 ns) variance introduced by outliers: 63% (severely inflated) benchmarked fused-effects/Separate carriers/100 time 4.781 μs (4.712 μs .. 4.879 μs) 0.995 R² (0.988 R² .. 0.999 R²) mean 4.767 μs (4.720 μs .. 4.844 μs) std dev 199.3 ns (136.2 ns .. 321.1 ns) variance introduced by outliers: 23% (moderately inflated) benchmarked fused-effects/Separate carriers/1000 time 50.35 μs (48.92 μs .. 51.66 μs) 0.992 R² (0.985 R² .. 0.997 R²) mean 49.53 μs (48.85 μs .. 50.44 μs) std dev 2.588 μs (1.973 μs .. 3.720 μs) variance introduced by outliers: 30% (moderately inflated) benchmarked fused-effects/RWST carrier/10 time 336.0 ns (317.5 ns .. 361.3 ns) 0.971 R² (0.951 R² .. 0.994 R²) mean 322.9 ns (317.0 ns .. 332.4 ns) std dev 25.15 ns (16.85 ns .. 36.78 ns) variance introduced by outliers: 50% (moderately inflated) benchmarked fused-effects/RWST carrier/100 time 2.664 μs (2.586 μs .. 2.723 μs) 0.993 R² (0.982 R² .. 0.999 R²) mean 2.928 μs (2.858 μs .. 3.105 μs) std dev 379.1 ns (137.0 ns .. 685.6 ns) variance introduced by outliers: 74% (severely inflated) benchmarked fused-effects/RWST carrier/1000 time 28.39 μs (28.01 μs .. 28.94 μs) 0.997 R² (0.995 R² .. 0.999 R²) mean 28.49 μs (28.30 μs .. 28.75 μs) std dev 753.7 ns (584.7 ns .. 1.040 μs) variance introduced by outliers: 11% (moderately inflated) ``` I would expect to see a small speedup using RWSC (`mtl` versions of similar code are around 10% faster with RWS). This is much too large and makes me think that we might not be getting enough fusion.
perf
manually fused rwst carrier is significantly faster than statec readerc writerc given and the following computation haskell go member state int sig member writer string sig member reader bool sig carrier sig m int m go n form take n cycle names str do let len length str modify int len curr get int when curr n do should ask let str if should then reverse str else str tell str interpreting it with rwsc is around twice as fast as statec readerc writerc benchmarked fused effects separate carriers time ns ns ns r² r² r² mean ns ns ns std dev ns ns ns variance introduced by outliers severely inflated benchmarked fused effects separate carriers time μs μs μs r² r² r² mean μs μs μs std dev ns ns ns variance introduced by outliers moderately inflated benchmarked fused effects separate carriers time μs μs μs r² r² r² mean μs μs μs std dev μs μs μs variance introduced by outliers moderately inflated benchmarked fused effects rwst carrier time ns ns ns r² r² r² mean ns ns ns std dev ns ns ns variance introduced by outliers moderately inflated benchmarked fused effects rwst carrier time μs μs μs r² r² r² mean μs μs μs std dev ns ns ns variance introduced by outliers severely inflated benchmarked fused effects rwst carrier time μs μs μs r² r² r² mean μs μs μs std dev ns ns μs variance introduced by outliers moderately inflated i would expect to see a small speedup using rwsc mtl versions of similar code are around faster with rws this is much too large and makes me think that we might not be getting enough fusion
1
285,518
8,761,701,155
IssuesEvent
2018-12-16 20:05:49
FSPNet/Orion
https://api.github.com/repos/FSPNet/Orion
opened
HTTP Method shouldn't be GET
🐛 Bug 🔖 Version/1.0 🚨 Priority/P0
**Describe the bug** HTTP Method shouldn't be GET, it should be POST, **Expected behavior** Change some GET routes to POST. like 'warband', 'factorio', **Environment (please complete the following information):** - Orion version(s): 1.0.2
1.0
HTTP Method shouldn't be GET - **Describe the bug** HTTP Method shouldn't be GET, it should be POST, **Expected behavior** Change some GET routes to POST. like 'warband', 'factorio', **Environment (please complete the following information):** - Orion version(s): 1.0.2
non_perf
http method shouldn t be get describe the bug http method shouldn t be get it should be post expected behavior change some get routes to post like warband factorio environment please complete the following information orion version s
0
197,997
14,953,083,623
IssuesEvent
2021-01-26 16:16:22
pints-team/pints
https://api.github.com/repos/pints-team/pints
closed
Add value-based (numerical) tests for all samplers / optimisers
unit-testing
E.g. - Seed - Run 100 iterations - Check that there's sufficient change within those iterations (and reduce n if possible) - Store output, either in CSV or in code - Compare This would be _in addition to_ functional testing, and would be slightly annoying because you'd need to update the stored results any time you made changes. But probably still good to have to check the impact of e.g. refactoring Thoughts @ben18785 @fcooper8472 @martinjrobins @DavAug @rcw5890 ?
1.0
Add value-based (numerical) tests for all samplers / optimisers - E.g. - Seed - Run 100 iterations - Check that there's sufficient change within those iterations (and reduce n if possible) - Store output, either in CSV or in code - Compare This would be _in addition to_ functional testing, and would be slightly annoying because you'd need to update the stored results any time you made changes. But probably still good to have to check the impact of e.g. refactoring Thoughts @ben18785 @fcooper8472 @martinjrobins @DavAug @rcw5890 ?
non_perf
add value based numerical tests for all samplers optimisers e g seed run iterations check that there s sufficient change within those iterations and reduce n if possible store output either in csv or in code compare this would be in addition to functional testing and would be slightly annoying because you d need to update the stored results any time you made changes but probably still good to have to check the impact of e g refactoring thoughts martinjrobins davaug
0
37,990
18,871,891,643
IssuesEvent
2021-11-13 10:23:46
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
reopened
sql: don't touch ranges unnecessarily during limited scans
C-performance A-sql-execution T-sql-queries
A user saw a case where selecting all rows from a (small) partitioned table was significantly faster than selecting one row using LIMIT 1. The reason was that the first region was also the farthest away, and it only had one row. There is a single TableReader planned in that region (because of the limit); the kv fetcher requests two keys in this case, and the scan ends up going to a range on another region. In general, we need to fetch one more key so that we're sure we got all keys for a row (if there are multiple column families). But we could get the same signal if we knew that we hit the end of a range. ~Having a KV API that allows stopping a scan at the end of the range would be useful here. CC @tbg @andreimatei who have been thinking about the APIs between KV and SQL.~ EDIT (@erikgrinaker): The KV API is available as of #70763.
True
sql: don't touch ranges unnecessarily during limited scans - A user saw a case where selecting all rows from a (small) partitioned table was significantly faster than selecting one row using LIMIT 1. The reason was that the first region was also the farthest away, and it only had one row. There is a single TableReader planned in that region (because of the limit); the kv fetcher requests two keys in this case, and the scan ends up going to a range on another region. In general, we need to fetch one more key so that we're sure we got all keys for a row (if there are multiple column families). But we could get the same signal if we knew that we hit the end of a range. ~Having a KV API that allows stopping a scan at the end of the range would be useful here. CC @tbg @andreimatei who have been thinking about the APIs between KV and SQL.~ EDIT (@erikgrinaker): The KV API is available as of #70763.
perf
sql don t touch ranges unnecessarily during limited scans a user saw a case where selecting all rows from a small partitioned table was significantly faster than selecting one row using limit the reason was that the first region was also the farthest away and it only had one row there is a single tablereader planned in that region because of the limit the kv fetcher requests two keys in this case and the scan ends up going to a range on another region in general we need to fetch one more key so that we re sure we got all keys for a row if there are multiple column families but we could get the same signal if we knew that we hit the end of a range having a kv api that allows stopping a scan at the end of the range would be useful here cc tbg andreimatei who have been thinking about the apis between kv and sql edit erikgrinaker the kv api is available as of
1
23,460
11,887,072,937
IssuesEvent
2020-03-28 00:02:55
microsoft/vscode-cpptools
https://api.github.com/repos/microsoft/vscode-cpptools
closed
Compiler path with spaces produces error
Feature: Configuration Language Service bug quick fix regression
Related topic https://community.platformio.org/t/platform-io-compiler-error/12684 1) We provide a full path to the compiler using `compilerPath` option 2) This a path option, so we do not do any modifications because arguments are passed to `compilerArgs` Yes, we can escape `compilerPath` by default but it looks like a bug. Thanks! /cc @valeros @sean-mcmanus
1.0
Compiler path with spaces produces error - Related topic https://community.platformio.org/t/platform-io-compiler-error/12684 1) We provide a full path to the compiler using `compilerPath` option 2) This a path option, so we do not do any modifications because arguments are passed to `compilerArgs` Yes, we can escape `compilerPath` by default but it looks like a bug. Thanks! /cc @valeros @sean-mcmanus
non_perf
compiler path with spaces produces error related topic we provide a full path to the compiler using compilerpath option this a path option so we do not do any modifications because arguments are passed to compilerargs yes we can escape compilerpath by default but it looks like a bug thanks cc valeros sean mcmanus
0
12,872
8,029,252,919
IssuesEvent
2018-07-27 15:25:07
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
opened
storage: Raft not committing new entries incrementally
A-core-replication C-performance
A cluster that got into a weird state has revealed something that doesn't appear to be working correctly in raft. The cluster was running 2.0.4 with the patch in https://github.com/cockroachdb/cockroach/issues/27804#issuecomment-406635478 A range had gotten into a state with only two replicas, and one of them had been down for a long time. The live node was left as leader and continually added to its raft log. After the downed follower came back online, that multi-GB raft log had to be copied to the follower before new progress could be made (including establishing a new lease for the range or adding a third replica). This process went extremely slowly, taking 30 hours in one case to transfer a few GB. While this was happening, the range status page showed that the follower's Last Index was increasing steadily, but the Commit and Applied indexes remained constant until the follower caught up completely. This is unexpected; the Commit and Applied indexes should increase as the follower progresses, tracking only a few message round trips behind the Last index. Note that raft leadership was stable during this time (term number was only 141), so it was not the case that elections were being called frequently (which could slow down the leader and prevent it from committing entries as it processes MsgAppResps).
True
storage: Raft not committing new entries incrementally - A cluster that got into a weird state has revealed something that doesn't appear to be working correctly in raft. The cluster was running 2.0.4 with the patch in https://github.com/cockroachdb/cockroach/issues/27804#issuecomment-406635478 A range had gotten into a state with only two replicas, and one of them had been down for a long time. The live node was left as leader and continually added to its raft log. After the downed follower came back online, that multi-GB raft log had to be copied to the follower before new progress could be made (including establishing a new lease for the range or adding a third replica). This process went extremely slowly, taking 30 hours in one case to transfer a few GB. While this was happening, the range status page showed that the follower's Last Index was increasing steadily, but the Commit and Applied indexes remained constant until the follower caught up completely. This is unexpected; the Commit and Applied indexes should increase as the follower progresses, tracking only a few message round trips behind the Last index. Note that raft leadership was stable during this time (term number was only 141), so it was not the case that elections were being called frequently (which could slow down the leader and prevent it from committing entries as it processes MsgAppResps).
perf
storage raft not committing new entries incrementally a cluster that got into a weird state has revealed something that doesn t appear to be working correctly in raft the cluster was running with the patch in a range had gotten into a state with only two replicas and one of them had been down for a long time the live node was left as leader and continually added to its raft log after the downed follower came back online that multi gb raft log had to be copied to the follower before new progress could be made including establishing a new lease for the range or adding a third replica this process went extremely slowly taking hours in one case to transfer a few gb while this was happening the range status page showed that the follower s last index was increasing steadily but the commit and applied indexes remained constant until the follower caught up completely this is unexpected the commit and applied indexes should increase as the follower progresses tracking only a few message round trips behind the last index note that raft leadership was stable during this time term number was only so it was not the case that elections were being called frequently which could slow down the leader and prevent it from committing entries as it processes msgappresps
1
176,051
13,625,085,484
IssuesEvent
2020-09-24 09:00:10
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
closed
[Test] Test runner doesn't seem to honor weak/strong flag
area-test
See detailed transcript below, but in summary: Running test.py on a test with `--nnbd=weak` passed as an argument results in this command line: ``` DART_CONFIGURATION=ReleaseX64 xcodebuild/ReleaseX64/dart --enable-experiment=non-nullable --ignore-unrecognized-flags --packages=/Users/leafp/src/dart-repo/sdk/.packages /Users/leafp/src/dart-repo/sdk/tests/language/nnbd/normalization/generic_function_type_object_normalization_test.dart ``` Note that this command line does not set the mode to weak mode. Running with `--nnbd=strong` results in the same command line. cc @munificent @sortie ``` leafp-macbookpro:sdk leafp$ python tools/test.py -c dartk -m release --enable-experiment=non-nullable --nnbd=weak tests/language/nnbd/normalization/generic_function_type_object_normalization_test.dart Test configuration: custom configuration(architecture: x64, compiler: dartk, mode: release, runtime: vm, system: mac, nnbd: weak, enable-experiment: [non-nullable]) Suites tested: language FAILED: dartk-vm release_x64 language/nnbd/normalization/generic_function_type_object_normalization_test Expected: Pass Actual: RuntimeError --- Command "vm" (took 602ms): DART_CONFIGURATION=ReleaseX64 xcodebuild/ReleaseX64/dart --enable-experiment=non-nullable --ignore-unrecognized-flags --packages=/Users/leafp/src/dart-repo/sdk/.packages /Users/leafp/src/dart-repo/sdk/tests/language/nnbd/normalization/generic_function_type_object_normalization_test.dart exit code: 255 stderr: Unhandled exception: Expect.notEquals(unexpected: <<R0 extends Future<Never>, R1 extends Never, R2 extends Null>() => Null>, actual:<<R0 extends Future<Never>, R1 extends Never, R2 extends Null>() => Null>) fails. #0 Expect._fail (package:expect/expect.dart:685:5) #1 Expect.notEquals (package:expect/expect.dart:306:5) #2 checkNotEquals2 (file:///Users/leafp/src/dart-repo/sdk/tests/language/nnbd/normalization/type_builder.dart:136:10) #3 checkTypeNotEquals2 (file:///Users/leafp/src/dart-repo/sdk/tests/language/nnbd/normalization/generic_function_type_object_normalization_test.dart:22:3) #4 neverBoundTests (file:///Users/leafp/src/dart-repo/sdk/tests/language/nnbd/normalization/generic_function_type_object_normalization_test.dart:175:5) #5 main (file:///Users/leafp/src/dart-repo/sdk/tests/language/nnbd/normalization/generic_function_type_object_normalization_test.dart:199:3) #6 _startIsolate.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:301:19) #7 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:168:12) --- Re-run this test: python tools/test.py -m release -c dartk --nnbd weak --enable-experiment non-nullable language/nnbd/normalization/generic_function_type_object_normalization_test [00:00 | 100% | + 0 | - 1] === 0 tests passed, 1 failed === ```
1.0
[Test] Test runner doesn't seem to honor weak/strong flag - See detailed transcript below, but in summary: Running test.py on a test with `--nnbd=weak` passed as an argument results in this command line: ``` DART_CONFIGURATION=ReleaseX64 xcodebuild/ReleaseX64/dart --enable-experiment=non-nullable --ignore-unrecognized-flags --packages=/Users/leafp/src/dart-repo/sdk/.packages /Users/leafp/src/dart-repo/sdk/tests/language/nnbd/normalization/generic_function_type_object_normalization_test.dart ``` Note that this command line does not set the mode to weak mode. Running with `--nnbd=strong` results in the same command line. cc @munificent @sortie ``` leafp-macbookpro:sdk leafp$ python tools/test.py -c dartk -m release --enable-experiment=non-nullable --nnbd=weak tests/language/nnbd/normalization/generic_function_type_object_normalization_test.dart Test configuration: custom configuration(architecture: x64, compiler: dartk, mode: release, runtime: vm, system: mac, nnbd: weak, enable-experiment: [non-nullable]) Suites tested: language FAILED: dartk-vm release_x64 language/nnbd/normalization/generic_function_type_object_normalization_test Expected: Pass Actual: RuntimeError --- Command "vm" (took 602ms): DART_CONFIGURATION=ReleaseX64 xcodebuild/ReleaseX64/dart --enable-experiment=non-nullable --ignore-unrecognized-flags --packages=/Users/leafp/src/dart-repo/sdk/.packages /Users/leafp/src/dart-repo/sdk/tests/language/nnbd/normalization/generic_function_type_object_normalization_test.dart exit code: 255 stderr: Unhandled exception: Expect.notEquals(unexpected: <<R0 extends Future<Never>, R1 extends Never, R2 extends Null>() => Null>, actual:<<R0 extends Future<Never>, R1 extends Never, R2 extends Null>() => Null>) fails. #0 Expect._fail (package:expect/expect.dart:685:5) #1 Expect.notEquals (package:expect/expect.dart:306:5) #2 checkNotEquals2 (file:///Users/leafp/src/dart-repo/sdk/tests/language/nnbd/normalization/type_builder.dart:136:10) #3 checkTypeNotEquals2 (file:///Users/leafp/src/dart-repo/sdk/tests/language/nnbd/normalization/generic_function_type_object_normalization_test.dart:22:3) #4 neverBoundTests (file:///Users/leafp/src/dart-repo/sdk/tests/language/nnbd/normalization/generic_function_type_object_normalization_test.dart:175:5) #5 main (file:///Users/leafp/src/dart-repo/sdk/tests/language/nnbd/normalization/generic_function_type_object_normalization_test.dart:199:3) #6 _startIsolate.<anonymous closure> (dart:isolate-patch/isolate_patch.dart:301:19) #7 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:168:12) --- Re-run this test: python tools/test.py -m release -c dartk --nnbd weak --enable-experiment non-nullable language/nnbd/normalization/generic_function_type_object_normalization_test [00:00 | 100% | + 0 | - 1] === 0 tests passed, 1 failed === ```
non_perf
test runner doesn t seem to honor weak strong flag see detailed transcript below but in summary running test py on a test with nnbd weak passed as an argument results in this command line dart configuration xcodebuild dart enable experiment non nullable ignore unrecognized flags packages users leafp src dart repo sdk packages users leafp src dart repo sdk tests language nnbd normalization generic function type object normalization test dart note that this command line does not set the mode to weak mode running with nnbd strong results in the same command line cc munificent sortie leafp macbookpro sdk leafp python tools test py c dartk m release enable experiment non nullable nnbd weak tests language nnbd normalization generic function type object normalization test dart test configuration custom configuration architecture compiler dartk mode release runtime vm system mac nnbd weak enable experiment suites tested language failed dartk vm release language nnbd normalization generic function type object normalization test expected pass actual runtimeerror command vm took dart configuration xcodebuild dart enable experiment non nullable ignore unrecognized flags packages users leafp src dart repo sdk packages users leafp src dart repo sdk tests language nnbd normalization generic function type object normalization test dart exit code stderr unhandled exception expect notequals unexpected extends never extends null null actual extends never extends null null fails expect fail package expect expect dart expect notequals package expect expect dart file users leafp src dart repo sdk tests language nnbd normalization type builder dart file users leafp src dart repo sdk tests language nnbd normalization generic function type object normalization test dart neverboundtests file users leafp src dart repo sdk tests language nnbd normalization generic function type object normalization test dart main file users leafp src dart repo sdk tests language nnbd normalization generic function type object normalization test dart startisolate dart isolate patch isolate patch dart rawreceiveportimpl handlemessage dart isolate patch isolate patch dart re run this test python tools test py m release c dartk nnbd weak enable experiment non nullable language nnbd normalization generic function type object normalization test tests passed failed
0
23,646
12,056,063,823
IssuesEvent
2020-04-15 13:56:08
NREL/EnergyPlus
https://api.github.com/repos/NREL/EnergyPlus
opened
Request to change SQL data type for Tabular Data from text to value
Performance
Issue overview -------------- Interface developer notes performance problems accessing SQLite database because the data type for tabular data are strings instead of a value. Heavy use of custom and predefined tabular data makes for large database files. Interface developer feels that they can improve performance during post processing if the data types were changed to "value." I recall that this was tried, by @kbenne, way back when SQLite content was greatly expanded. But I cannot remember what the issue was. Seems like occasionally some special values wouldn't translate okay. Perhaps with an increased focus on performance this could be revisited. ![79130266-4a9d8a80-7d6c-11ea-9fd9-c3bf8a96414b](https://user-images.githubusercontent.com/8754769/79345300-f080fe80-7efe-11ea-813f-dc7df4e7216c.png) ### Details Some additional details for this issue (if relevant): - Platform (Operating system, version) - Version of EnergyPlus (if using an intermediate build, include SHA) - Unmethours link or helpdesk ticket number ### Checklist Add to this list or remove from it as applicable. This is a simple templated set of guidelines. - [ ] Defect file added (list location of defect file here) - [ ] Ticket added to Pivotal for defect (development team task) - [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
True
Request to change SQL data type for Tabular Data from text to value - Issue overview -------------- Interface developer notes performance problems accessing SQLite database because the data type for tabular data are strings instead of a value. Heavy use of custom and predefined tabular data makes for large database files. Interface developer feels that they can improve performance during post processing if the data types were changed to "value." I recall that this was tried, by @kbenne, way back when SQLite content was greatly expanded. But I cannot remember what the issue was. Seems like occasionally some special values wouldn't translate okay. Perhaps with an increased focus on performance this could be revisited. ![79130266-4a9d8a80-7d6c-11ea-9fd9-c3bf8a96414b](https://user-images.githubusercontent.com/8754769/79345300-f080fe80-7efe-11ea-813f-dc7df4e7216c.png) ### Details Some additional details for this issue (if relevant): - Platform (Operating system, version) - Version of EnergyPlus (if using an intermediate build, include SHA) - Unmethours link or helpdesk ticket number ### Checklist Add to this list or remove from it as applicable. This is a simple templated set of guidelines. - [ ] Defect file added (list location of defect file here) - [ ] Ticket added to Pivotal for defect (development team task) - [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
perf
request to change sql data type for tabular data from text to value issue overview interface developer notes performance problems accessing sqlite database because the data type for tabular data are strings instead of a value heavy use of custom and predefined tabular data makes for large database files interface developer feels that they can improve performance during post processing if the data types were changed to value i recall that this was tried by kbenne way back when sqlite content was greatly expanded but i cannot remember what the issue was seems like occasionally some special values wouldn t translate okay perhaps with an increased focus on performance this could be revisited details some additional details for this issue if relevant platform operating system version version of energyplus if using an intermediate build include sha unmethours link or helpdesk ticket number checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added list location of defect file here ticket added to pivotal for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect
1
316,214
23,619,773,455
IssuesEvent
2022-08-24 19:19:46
Kong/gateway-operator
https://api.github.com/repos/Kong/gateway-operator
opened
Document Supported Gateway Topologies
documentation area/kep area/scalability
### Is there an existing issue for this? - [X] I have searched the existing issues ### Problem Statement Currently in our early alpha stage we support only a single monolithic `Gateway` in that there's only one `DataPlane` behind it, and only one "instance" (`Pod`, to be precise) behind that. We do however have notions of other features which come with some topology considerations: - [ ] hybrid mode https://github.com/Kong/gateway-operator/issues/229 - [ ] horizontal pod scaling https://github.com/Kong/gateway-operator/issues/170 https://github.com/Kong/gateway-operator/issues/171 The purpose of this task is to document the topologies that we expect to support when we go into beta within the [relevant KEP](https://github.com/Kong/gateway-operator/blob/main/keps/0001-managed-gateways.md) to build consensus about this, and for posterity. ### Proposed Solution _No response_ ### Additional information _No response_ ### Acceptance Criteria _No response_
1.0
Document Supported Gateway Topologies - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Problem Statement Currently in our early alpha stage we support only a single monolithic `Gateway` in that there's only one `DataPlane` behind it, and only one "instance" (`Pod`, to be precise) behind that. We do however have notions of other features which come with some topology considerations: - [ ] hybrid mode https://github.com/Kong/gateway-operator/issues/229 - [ ] horizontal pod scaling https://github.com/Kong/gateway-operator/issues/170 https://github.com/Kong/gateway-operator/issues/171 The purpose of this task is to document the topologies that we expect to support when we go into beta within the [relevant KEP](https://github.com/Kong/gateway-operator/blob/main/keps/0001-managed-gateways.md) to build consensus about this, and for posterity. ### Proposed Solution _No response_ ### Additional information _No response_ ### Acceptance Criteria _No response_
non_perf
document supported gateway topologies is there an existing issue for this i have searched the existing issues problem statement currently in our early alpha stage we support only a single monolithic gateway in that there s only one dataplane behind it and only one instance pod to be precise behind that we do however have notions of other features which come with some topology considerations hybrid mode horizontal pod scaling the purpose of this task is to document the topologies that we expect to support when we go into beta within the to build consensus about this and for posterity proposed solution no response additional information no response acceptance criteria no response
0
829,767
31,897,169,510
IssuesEvent
2023-09-18 03:39:59
wso2/api-manager
https://api.github.com/repos/wso2/api-manager
closed
[4.2.0] Upgrade REST API version in Docs
Priority/Highest Component/APIM
This issue is used to track the REST API upgrade in Docs. Major Rest API versions are modified as below publisher : v3--> v4 (major version: v4 , latest version : v4) admin : v3 --> v4 (major version: v4 , latest version : v4) devportal : v2 --> (major version: v3 , latest version : v3)
1.0
[4.2.0] Upgrade REST API version in Docs - This issue is used to track the REST API upgrade in Docs. Major Rest API versions are modified as below publisher : v3--> v4 (major version: v4 , latest version : v4) admin : v3 --> v4 (major version: v4 , latest version : v4) devportal : v2 --> (major version: v3 , latest version : v3)
non_perf
upgrade rest api version in docs this issue is used to track the rest api upgrade in docs major rest api versions are modified as below publisher major version latest version admin major version latest version devportal major version latest version
0
48,373
25,498,732,103
IssuesEvent
2022-11-28 00:22:20
cessen/ropey
https://api.github.com/repos/cessen/ropey
closed
Make Lines iterator more efficient
performance
Currently the `Lines` iterator is roughly equivalent to just calling `Rope::line()` repeatedly with an incrementing index. This is O(log N) for each call to `Lines::next()`, and also is just generally less efficient than it needs to be. This is not only sub-optimal, but also stands out compared to the other iterators which are all O(1) and very fast. It should be possible to also make `Lines` O(1) and just generally more efficient.
True
Make Lines iterator more efficient - Currently the `Lines` iterator is roughly equivalent to just calling `Rope::line()` repeatedly with an incrementing index. This is O(log N) for each call to `Lines::next()`, and also is just generally less efficient than it needs to be. This is not only sub-optimal, but also stands out compared to the other iterators which are all O(1) and very fast. It should be possible to also make `Lines` O(1) and just generally more efficient.
perf
make lines iterator more efficient currently the lines iterator is roughly equivalent to just calling rope line repeatedly with an incrementing index this is o log n for each call to lines next and also is just generally less efficient than it needs to be this is not only sub optimal but also stands out compared to the other iterators which are all o and very fast it should be possible to also make lines o and just generally more efficient
1
24,158
12,226,403,415
IssuesEvent
2020-05-03 10:45:17
returntocorp/semgrep
https://api.github.com/repos/returntocorp/semgrep
closed
Investigate slow semgrep perf
performance
Scanning over https://github.com/sobolevn/python-code-disasters has my CPU pinned at 100% for 2 hours. ``` docker run --rm -v $(pwd):/home/repo returntocorp/sgrep:0.4.9 --json --config=https://sgrep.live/c/r/r2c --skip-pattern-validation ``` Adding as a performance investigation / something to add to our perf regression suite.
True
Investigate slow semgrep perf - Scanning over https://github.com/sobolevn/python-code-disasters has my CPU pinned at 100% for 2 hours. ``` docker run --rm -v $(pwd):/home/repo returntocorp/sgrep:0.4.9 --json --config=https://sgrep.live/c/r/r2c --skip-pattern-validation ``` Adding as a performance investigation / something to add to our perf regression suite.
perf
investigate slow semgrep perf scanning over has my cpu pinned at for hours docker run rm v pwd home repo returntocorp sgrep json config skip pattern validation adding as a performance investigation something to add to our perf regression suite
1
40,875
21,259,429,633
IssuesEvent
2022-04-13 01:21:19
reclosedev/requests-cache
https://api.github.com/repos/reclosedev/requests-cache
opened
Filesystem backend: Add option to store response content in a separate file
enhancement performance
Related to #407. This would help with a few use cases that have come up a few of times now: * Efficiently storing large response contents (without the serialization bottleneck) * Caching responses that primarily contain media or other files you want to access outsize of requests-cache
True
Filesystem backend: Add option to store response content in a separate file - Related to #407. This would help with a few use cases that have come up a few of times now: * Efficiently storing large response contents (without the serialization bottleneck) * Caching responses that primarily contain media or other files you want to access outsize of requests-cache
perf
filesystem backend add option to store response content in a separate file related to this would help with a few use cases that have come up a few of times now efficiently storing large response contents without the serialization bottleneck caching responses that primarily contain media or other files you want to access outsize of requests cache
1
22,410
11,595,707,394
IssuesEvent
2020-02-24 17:30:18
qbittorrent/qBittorrent
https://api.github.com/repos/qbittorrent/qBittorrent
closed
stutters
Duplicate Performance
**Please provide the following information** ### qBittorrent version and Operating System Os: Windows 7 64 Bit Ultimate Qbittorrent 4.1.1 64 Bit ### If on linux, libtorrent and Qt version N/A ### What is the problem the setup menu stops every time I open it. Or, when downloading 1 torrent, it will still be blocked. Videos: https://drive.google.com/open?id=11KTLTG_jUyCCNUyZ9cwcHPZ9IKpD3AfV ### What is the expected behavior Not to stumble. (no stutters!) ### Steps to reproduce 1. Go to settings. 2. click on any of the menus. ### Extra info(if any) 7Gb ram Q6600 processor 64Gb SSD qbittorrent installed and 1Tb Western Digital 7200rpm HDD.
True
stutters - **Please provide the following information** ### qBittorrent version and Operating System Os: Windows 7 64 Bit Ultimate Qbittorrent 4.1.1 64 Bit ### If on linux, libtorrent and Qt version N/A ### What is the problem the setup menu stops every time I open it. Or, when downloading 1 torrent, it will still be blocked. Videos: https://drive.google.com/open?id=11KTLTG_jUyCCNUyZ9cwcHPZ9IKpD3AfV ### What is the expected behavior Not to stumble. (no stutters!) ### Steps to reproduce 1. Go to settings. 2. click on any of the menus. ### Extra info(if any) 7Gb ram Q6600 processor 64Gb SSD qbittorrent installed and 1Tb Western Digital 7200rpm HDD.
perf
stutters please provide the following information qbittorrent version and operating system os windows bit ultimate qbittorrent bit if on linux libtorrent and qt version n a what is the problem the setup menu stops every time i open it or when downloading torrent it will still be blocked videos what is the expected behavior not to stumble no stutters steps to reproduce go to settings click on any of the menus extra info if any ram processor ssd qbittorrent installed and western digital hdd
1
56,665
8,109,213,931
IssuesEvent
2018-08-14 06:36:54
emotion-js/emotion
https://api.github.com/repos/emotion-js/emotion
closed
Browser Support?
documentation question stale
Just curious if there's a spot I wasn't able to find in the docs that lists what browsers are supported by emotion. I just deployed a component that uses it and have a torrent of errors from IE11 saying that it doesn't understand `WeakMap` - is this an oversight, recommended to be polyfilled independently, or intended not to support IE11? Whatever the case, I'd be happy to help out with a PR if that would be useful, just want to make sure I understand the intent here!
1.0
Browser Support? - Just curious if there's a spot I wasn't able to find in the docs that lists what browsers are supported by emotion. I just deployed a component that uses it and have a torrent of errors from IE11 saying that it doesn't understand `WeakMap` - is this an oversight, recommended to be polyfilled independently, or intended not to support IE11? Whatever the case, I'd be happy to help out with a PR if that would be useful, just want to make sure I understand the intent here!
non_perf
browser support just curious if there s a spot i wasn t able to find in the docs that lists what browsers are supported by emotion i just deployed a component that uses it and have a torrent of errors from saying that it doesn t understand weakmap is this an oversight recommended to be polyfilled independently or intended not to support whatever the case i d be happy to help out with a pr if that would be useful just want to make sure i understand the intent here
0
365,907
10,799,585,405
IssuesEvent
2019-11-06 12:31:23
DFO-Ocean-Navigator/Ocean-Data-Map-Project
https://api.github.com/repos/DFO-Ocean-Navigator/Ocean-Data-Map-Project
closed
Enable OpenLayers zoom slider
Javascript New Feature Priority: Low
I'm thinking the version with the slider in between the zoom buttons. ![image](https://user-images.githubusercontent.com/5572045/64077766-f5fe4100-ccad-11e9-97a4-f2e14634f54c.png) Example code: https://openlayers.org/en/latest/examples/zoomslider.html
1.0
Enable OpenLayers zoom slider - I'm thinking the version with the slider in between the zoom buttons. ![image](https://user-images.githubusercontent.com/5572045/64077766-f5fe4100-ccad-11e9-97a4-f2e14634f54c.png) Example code: https://openlayers.org/en/latest/examples/zoomslider.html
non_perf
enable openlayers zoom slider i m thinking the version with the slider in between the zoom buttons example code
0
651,023
21,448,104,383
IssuesEvent
2022-04-25 08:37:06
space-wizards/space-station-14
https://api.github.com/repos/space-wizards/space-station-14
closed
Implement hardsuits properly
Priority: 2-Before Release Issue: Feature Request Difficulty: 2-Medium
<!-- To automatically tag this issue, add the uppercase label(s) surrounded by brackets below, for example: [LABEL] --> ## Description <!-- Explain your issue in detail, including the steps to reproduce it if applicable. Issues without proper explanation are liable to be closed by maintainers.--> Currently hardsuits are two pieces, a helmet and a torso. They are functional in-game currently as they protect against depressurization through PressureProtection.cs. However the hardsuit should be one piece. A torso that when donned puts a "helmet" icon in your hotbar that toggles the helmet of the suit on or off. When the helmet is toggled on, the suit should draw from an oxygen tank on the players belt, hand, back or "suit-slot". What should happen when a player dons a hardsuit: 1. A helmet hotbar icon appears that toggles the suits helmet on/off. 2. A suit-slot should appear as a separate slot on the users hotbar. This can be used for hooking stuff like tanks, tools and other gadgets. 3. When toggled off, the user should be able to eat, drink, and wear hats like normal. However, a worn hat will prevent the hardsuit helmet from closing when pressed. 4. When toggled on, the suit should draw from an oxygen tank on the players belt, hand, back or "suit-slot". It also should provide another hotbar icon for the flashlight on the hardsuit. The flashlight powercell will be swappable by screwdrivering the hardsuit and removing the cell. I think that hardsuits should be a rarer find in SS14. I also think the order you put a hardsuit on should effect some stuff, like if you put the hardsuit on over your belt, your belt would be inaccessible, but if you put your hardsuit on and then your belt, you'd be able to access it.
1.0
Implement hardsuits properly - <!-- To automatically tag this issue, add the uppercase label(s) surrounded by brackets below, for example: [LABEL] --> ## Description <!-- Explain your issue in detail, including the steps to reproduce it if applicable. Issues without proper explanation are liable to be closed by maintainers.--> Currently hardsuits are two pieces, a helmet and a torso. They are functional in-game currently as they protect against depressurization through PressureProtection.cs. However the hardsuit should be one piece. A torso that when donned puts a "helmet" icon in your hotbar that toggles the helmet of the suit on or off. When the helmet is toggled on, the suit should draw from an oxygen tank on the players belt, hand, back or "suit-slot". What should happen when a player dons a hardsuit: 1. A helmet hotbar icon appears that toggles the suits helmet on/off. 2. A suit-slot should appear as a separate slot on the users hotbar. This can be used for hooking stuff like tanks, tools and other gadgets. 3. When toggled off, the user should be able to eat, drink, and wear hats like normal. However, a worn hat will prevent the hardsuit helmet from closing when pressed. 4. When toggled on, the suit should draw from an oxygen tank on the players belt, hand, back or "suit-slot". It also should provide another hotbar icon for the flashlight on the hardsuit. The flashlight powercell will be swappable by screwdrivering the hardsuit and removing the cell. I think that hardsuits should be a rarer find in SS14. I also think the order you put a hardsuit on should effect some stuff, like if you put the hardsuit on over your belt, your belt would be inaccessible, but if you put your hardsuit on and then your belt, you'd be able to access it.
non_perf
implement hardsuits properly description currently hardsuits are two pieces a helmet and a torso they are functional in game currently as they protect against depressurization through pressureprotection cs however the hardsuit should be one piece a torso that when donned puts a helmet icon in your hotbar that toggles the helmet of the suit on or off when the helmet is toggled on the suit should draw from an oxygen tank on the players belt hand back or suit slot what should happen when a player dons a hardsuit a helmet hotbar icon appears that toggles the suits helmet on off a suit slot should appear as a separate slot on the users hotbar this can be used for hooking stuff like tanks tools and other gadgets when toggled off the user should be able to eat drink and wear hats like normal however a worn hat will prevent the hardsuit helmet from closing when pressed when toggled on the suit should draw from an oxygen tank on the players belt hand back or suit slot it also should provide another hotbar icon for the flashlight on the hardsuit the flashlight powercell will be swappable by screwdrivering the hardsuit and removing the cell i think that hardsuits should be a rarer find in i also think the order you put a hardsuit on should effect some stuff like if you put the hardsuit on over your belt your belt would be inaccessible but if you put your hardsuit on and then your belt you d be able to access it
0
12,064
7,775,270,400
IssuesEvent
2018-06-05 01:48:00
deeplearning4j/deeplearning4j
https://api.github.com/repos/deeplearning4j/deeplearning4j
opened
DL4J: Benchmarks, resnet50: can't run batch size 16, can run batch 32
Bug DL4J Performance
I believe this is related to how CuDNN is configured... this particular model is set to ```ConvolutionLayer.AlgoMode.PREFER_FASTEST```; I suspect it's the CuDNN mode internally that is the reason (i.e., batch size 32 uses a different mode that requires less memory). Now, CuDNN (or at least some of the more recent versions?) does support specifying a maximum workspace size. We may be able to inspect the amount of available memory, and have CuDNN base it's algorithm selection on that. Alternatively, if we detect an OOM, we might be able to enforce use of a less memory-intensive algorithm rather than failing outright.
True
DL4J: Benchmarks, resnet50: can't run batch size 16, can run batch 32 - I believe this is related to how CuDNN is configured... this particular model is set to ```ConvolutionLayer.AlgoMode.PREFER_FASTEST```; I suspect it's the CuDNN mode internally that is the reason (i.e., batch size 32 uses a different mode that requires less memory). Now, CuDNN (or at least some of the more recent versions?) does support specifying a maximum workspace size. We may be able to inspect the amount of available memory, and have CuDNN base it's algorithm selection on that. Alternatively, if we detect an OOM, we might be able to enforce use of a less memory-intensive algorithm rather than failing outright.
perf
benchmarks can t run batch size can run batch i believe this is related to how cudnn is configured this particular model is set to convolutionlayer algomode prefer fastest i suspect it s the cudnn mode internally that is the reason i e batch size uses a different mode that requires less memory now cudnn or at least some of the more recent versions does support specifying a maximum workspace size we may be able to inspect the amount of available memory and have cudnn base it s algorithm selection on that alternatively if we detect an oom we might be able to enforce use of a less memory intensive algorithm rather than failing outright
1
7,323
5,970,677,253
IssuesEvent
2017-05-30 23:29:30
mozilla/thimble.mozilla.org
https://api.github.com/repos/mozilla/thimble.mozilla.org
closed
Publish server - Update knexjs
Performance publish.webmaker.org
We should update knex.js to the latest version as their changelog indicates some perf fixes as well as a switch from `pool2` to `generic-pool` as their connection pooling interface (I don't know if there will be significant implications for this but I assume better connection management). It is a bit of a random fix but is still probably worth doing. cc @cadecairos
True
Publish server - Update knexjs - We should update knex.js to the latest version as their changelog indicates some perf fixes as well as a switch from `pool2` to `generic-pool` as their connection pooling interface (I don't know if there will be significant implications for this but I assume better connection management). It is a bit of a random fix but is still probably worth doing. cc @cadecairos
perf
publish server update knexjs we should update knex js to the latest version as their changelog indicates some perf fixes as well as a switch from to generic pool as their connection pooling interface i don t know if there will be significant implications for this but i assume better connection management it is a bit of a random fix but is still probably worth doing cc cadecairos
1
600,774
18,356,285,077
IssuesEvent
2021-10-08 18:41:18
vtdangg/fa21-cse110-lab3
https://api.github.com/repos/vtdangg/fa21-cse110-lab3
opened
Use CSS Selectors
enhancement high priority collaborate
## What is the purpose fo the new feature or addition? To style the HTML elements from the meeting minutes. ## A clear and concise description of what the addition is and what it does. Each selector to be used will target a different identifier on the HTML element.
1.0
Use CSS Selectors - ## What is the purpose fo the new feature or addition? To style the HTML elements from the meeting minutes. ## A clear and concise description of what the addition is and what it does. Each selector to be used will target a different identifier on the HTML element.
non_perf
use css selectors what is the purpose fo the new feature or addition to style the html elements from the meeting minutes a clear and concise description of what the addition is and what it does each selector to be used will target a different identifier on the html element
0
13,468
8,228,076,532
IssuesEvent
2018-09-07 02:54:55
deeplearning4j/deeplearning4j
https://api.github.com/repos/deeplearning4j/deeplearning4j
opened
DL4J: MLP Profiling
DL4J Performance
ERROR: type should be string, got "\r\nhttps://gist.github.com/AlexDBlack/7fa542887d5e7933fc2c866819d9e1ac\r\n\r\nTracing results: (tracing adds per-method overhead: 5400 ms per epoch average)\r\n\r\n- Updater: 10500ms (41%)\r\n- Dropout, forward pass: 1649ms (6%) - mainly RNG op\r\n- Gemm, forward pass: 1423ms (6%)\r\n- Score calculation (output layer) is 2200ms (9%)\r\n\r\nNote also that for profiling (less accurate for small method calls), updater is 53% of runtime.\r\n\r\nNote that this is AMSGrapd updater, so one of the more complex ones (IIRC it has 3x parameters as state).\r\n\r\nAnyway, I see two main areas for improvement here:\r\n1. Dropout (known issue; there's multiple github issues open about it)\r\n2. Native updaters (mainly for better memory access patterns - iterate over arrays once, rather than N times)\r\n\r\n\r\n[Perf-2018-09-07.zip](https://github.com/deeplearning4j/deeplearning4j/files/2359313/Perf-2018-09-07.zip)\r\n\r\n\r\n"
True
DL4J: MLP Profiling - https://gist.github.com/AlexDBlack/7fa542887d5e7933fc2c866819d9e1ac Tracing results: (tracing adds per-method overhead: 5400 ms per epoch average) - Updater: 10500ms (41%) - Dropout, forward pass: 1649ms (6%) - mainly RNG op - Gemm, forward pass: 1423ms (6%) - Score calculation (output layer) is 2200ms (9%) Note also that for profiling (less accurate for small method calls), updater is 53% of runtime. Note that this is AMSGrapd updater, so one of the more complex ones (IIRC it has 3x parameters as state). Anyway, I see two main areas for improvement here: 1. Dropout (known issue; there's multiple github issues open about it) 2. Native updaters (mainly for better memory access patterns - iterate over arrays once, rather than N times) [Perf-2018-09-07.zip](https://github.com/deeplearning4j/deeplearning4j/files/2359313/Perf-2018-09-07.zip)
perf
mlp profiling tracing results tracing adds per method overhead ms per epoch average updater dropout forward pass mainly rng op gemm forward pass score calculation output layer is note also that for profiling less accurate for small method calls updater is of runtime note that this is amsgrapd updater so one of the more complex ones iirc it has parameters as state anyway i see two main areas for improvement here dropout known issue there s multiple github issues open about it native updaters mainly for better memory access patterns iterate over arrays once rather than n times
1
13,171
8,135,211,436
IssuesEvent
2018-08-20 01:10:03
OctopusDeploy/Issues
https://api.github.com/repos/OctopusDeploy/Issues
closed
As more tasks are queued, starting new tasks takes longer
area/performance
The more tasks that are queued, the longer it takes to start a new task. This starts being noticeable at the 500 queued task mark. Consequently the server has trouble reaching it's task cap if the tasks are relatively short.
True
As more tasks are queued, starting new tasks takes longer - The more tasks that are queued, the longer it takes to start a new task. This starts being noticeable at the 500 queued task mark. Consequently the server has trouble reaching it's task cap if the tasks are relatively short.
perf
as more tasks are queued starting new tasks takes longer the more tasks that are queued the longer it takes to start a new task this starts being noticeable at the queued task mark consequently the server has trouble reaching it s task cap if the tasks are relatively short
1
23,677
12,061,071,834
IssuesEvent
2020-04-15 22:42:30
microsoft/MixedRealityToolkit-Unity
https://api.github.com/repos/microsoft/MixedRealityToolkit-Unity
closed
Add Unity profiler markers to key MRTK code to assist in performance understanding / optimization
Feature Request Performance
MRTK does a lot of work on the behalf of applications. That work has a cost associated. Adding profiler markers can help the MRTK team better optimize critical code paths. This will also allow customers to better understand the costs associated with MRTK and to best optimize their applications. These markers are only active in development builds, per Unity documentation and would take the form of: ```c# private static readonly ProfilerMarker marker = new ProfilerMarker("[MRTK] class.method - optional note"); Method() { using (marker.Auto()) { ... } } ``` This feature should be implemented primarily in inner loop code for each of the core systems and providers. Note: some systems do not have inner loop code. Core Systems - ~~Boundary~~ - No inner loop functionality. - [X] Camera (#7654) - [x] Diagnostics (#7652) - [x] Visual Profiler - [x] Input (#7590) - [x] Unity input controllers - [x] OpenVR - [x] Windows Mixed Reality - [x] Windows Voice - [x] Pointers - [x] Scene (#7658) - [x] Spatial Awareness (#7649, #7654) - [x] Windows Mixed Reality Mesh Observer - [x] Spatial Object Mesh Observer - [x] Teleport (#7653) Extensions (#7661) - [x] Hand Physics - [x] Scene Transition - [x] Tracking Lost Documentation updates (#7671) - [x] Data providers - [x] Input - [x] Spatial awareness - [x] Performance
True
Add Unity profiler markers to key MRTK code to assist in performance understanding / optimization - MRTK does a lot of work on the behalf of applications. That work has a cost associated. Adding profiler markers can help the MRTK team better optimize critical code paths. This will also allow customers to better understand the costs associated with MRTK and to best optimize their applications. These markers are only active in development builds, per Unity documentation and would take the form of: ```c# private static readonly ProfilerMarker marker = new ProfilerMarker("[MRTK] class.method - optional note"); Method() { using (marker.Auto()) { ... } } ``` This feature should be implemented primarily in inner loop code for each of the core systems and providers. Note: some systems do not have inner loop code. Core Systems - ~~Boundary~~ - No inner loop functionality. - [X] Camera (#7654) - [x] Diagnostics (#7652) - [x] Visual Profiler - [x] Input (#7590) - [x] Unity input controllers - [x] OpenVR - [x] Windows Mixed Reality - [x] Windows Voice - [x] Pointers - [x] Scene (#7658) - [x] Spatial Awareness (#7649, #7654) - [x] Windows Mixed Reality Mesh Observer - [x] Spatial Object Mesh Observer - [x] Teleport (#7653) Extensions (#7661) - [x] Hand Physics - [x] Scene Transition - [x] Tracking Lost Documentation updates (#7671) - [x] Data providers - [x] Input - [x] Spatial awareness - [x] Performance
perf
add unity profiler markers to key mrtk code to assist in performance understanding optimization mrtk does a lot of work on the behalf of applications that work has a cost associated adding profiler markers can help the mrtk team better optimize critical code paths this will also allow customers to better understand the costs associated with mrtk and to best optimize their applications these markers are only active in development builds per unity documentation and would take the form of c private static readonly profilermarker marker new profilermarker class method optional note method using marker auto this feature should be implemented primarily in inner loop code for each of the core systems and providers note some systems do not have inner loop code core systems boundary no inner loop functionality camera diagnostics visual profiler input unity input controllers openvr windows mixed reality windows voice pointers scene spatial awareness windows mixed reality mesh observer spatial object mesh observer teleport extensions hand physics scene transition tracking lost documentation updates data providers input spatial awareness performance
1
8,749
2,611,542,789
IssuesEvent
2015-02-27 06:11:31
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
opened
siPointType (LuaAPI: constant for SendStat) doesn’t work when used the first time in game.
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Start one of the new target practice missions 2. Finish it or lose by failing (doesn’t matter) 3. See the ranking. Is it the word “kills” or “points”? 4. Start the mission again 5. Repeat step 3 What is the expected output? What do you see instead? At steps 3 and 5, I want to see the word “points”. At step 3, I actually see the word “kills”, at step 5 “points. That’s weird. :/ What version of the product are you using? On what operating system? r2f19ff0ded73 on GNU/Linux. Please provide any additional information below. I fear there may be other stats-screen related issues. ``` Original issue reported on code.google.com by `almikes@aol.com` on 15 Dec 2014 at 9:11
1.0
siPointType (LuaAPI: constant for SendStat) doesn’t work when used the first time in game. - ``` What steps will reproduce the problem? 1. Start one of the new target practice missions 2. Finish it or lose by failing (doesn’t matter) 3. See the ranking. Is it the word “kills” or “points”? 4. Start the mission again 5. Repeat step 3 What is the expected output? What do you see instead? At steps 3 and 5, I want to see the word “points”. At step 3, I actually see the word “kills”, at step 5 “points. That’s weird. :/ What version of the product are you using? On what operating system? r2f19ff0ded73 on GNU/Linux. Please provide any additional information below. I fear there may be other stats-screen related issues. ``` Original issue reported on code.google.com by `almikes@aol.com` on 15 Dec 2014 at 9:11
non_perf
sipointtype luaapi constant for sendstat doesn’t work when used the first time in game what steps will reproduce the problem start one of the new target practice missions finish it or lose by failing doesn’t matter see the ranking is it the word “kills” or “points” start the mission again repeat step what is the expected output what do you see instead at steps and i want to see the word “points” at step i actually see the word “kills” at step “points that’s weird what version of the product are you using on what operating system on gnu linux please provide any additional information below i fear there may be other stats screen related issues original issue reported on code google com by almikes aol com on dec at
0
23,292
11,902,321,336
IssuesEvent
2020-03-30 13:48:29
scalableminds/webknossos
https://api.github.com/repos/scalableminds/webknossos
closed
The comment tab can be quite slow when there are lots of comments
frontend performance
`getDerivedStateFromProps` is a major performance bottleneck when there are lots of trees with comments. Even clicking on one comment can take up to 7 seconds until something happens. The culprit is most likely the sorting in `getDerivedStateFromProps` which could be easily cached (and maybe optimized in another way, as well).
True
The comment tab can be quite slow when there are lots of comments - `getDerivedStateFromProps` is a major performance bottleneck when there are lots of trees with comments. Even clicking on one comment can take up to 7 seconds until something happens. The culprit is most likely the sorting in `getDerivedStateFromProps` which could be easily cached (and maybe optimized in another way, as well).
perf
the comment tab can be quite slow when there are lots of comments getderivedstatefromprops is a major performance bottleneck when there are lots of trees with comments even clicking on one comment can take up to seconds until something happens the culprit is most likely the sorting in getderivedstatefromprops which could be easily cached and maybe optimized in another way as well
1
78,694
15,051,594,451
IssuesEvent
2021-02-03 14:17:32
joomla/joomla-cms
https://api.github.com/repos/joomla/joomla-cms
closed
[4.0] UnknownAssetException: There is no "bootstrap.dropdown" asset of a "script" type in the registry.
No Code Attached Yet
### Steps to reproduce the issue I updated my copy of J4 from git this morning (past git update was from Jan 26) and now I get this error when trying to access the Administrator. ### Expected result Administrator opens. ### Actual result Get this error. Below is the call stack. Is there a database update that needs to be done for Bootstrap 5 such that a git pull is insufficient to update? Joomla\CMS\WebAsset\Exception\ UnknownAssetException in /var/www/html/libraries/src/WebAsset/WebAssetRegistry.php (line 132) WebAssetRegistry->get() in /var/www/html/libraries/src/WebAsset/WebAssetManager.php (line 257) WebAssetManager->useAsset() in /var/www/html/libraries/src/WebAsset/WebAssetManager.php (line 181) WebAssetManager->__call() in /var/www/html/libraries/src/HTML/Helpers/Bootstrap.php (line 232) Bootstrap::dropdown() in /var/www/html/libraries/src/HTML/HTMLHelper.php (line 322) HTMLHelper::call() in /var/www/html/libraries/src/HTML/HTMLHelper.php (line 154) HTMLHelper::_() in /var/www/html/administrator/modules/mod_user/tmpl/default.php (line 19) require('/var/www/html/administrator/modules/mod_user/tmpl/default.php') in /var/www/html/administrator/modules/mod_user/mod_user.php (line 16) include('/var/www/html/administrator/modules/mod_user/mod_user.php') in /var/www/html/libraries/src/Dispatcher/ModuleDispatcher.php (line 54) ModuleDispatcher::Joomla\CMS\Dispatcher\{closure}() in /var/www/html/libraries/src/Dispatcher/ModuleDispatcher.php (line 57) ModuleDispatcher->dispatch() in /var/www/html/libraries/src/Helper/ModuleHelper.php (line 293) ModuleHelper::renderRawModule() in /var/www/html/libraries/src/Helper/ModuleHelper.php (line 166) ModuleHelper::renderModule() in /var/www/html/libraries/src/Document/Renderer/Html/ModuleRenderer.php (line 97) ModuleRenderer->render() in /var/www/html/libraries/src/Document/Renderer/Html/ModulesRenderer.php (line 48) ModulesRenderer->render() in /var/www/html/libraries/src/Document/HtmlDocument.php (line 589) HtmlDocument->getBuffer() in /var/www/html/libraries/src/Document/HtmlDocument.php (line 895) HtmlDocument->_renderTemplate() in /var/www/html/libraries/src/Document/HtmlDocument.php (line 660) HtmlDocument->render() in /var/www/html/libraries/src/Document/ErrorDocument.php (line 140) ErrorDocument->render() in /var/www/html/libraries/src/Error/Renderer/HtmlRenderer.php (line 76) HtmlRenderer->render() in /var/www/html/libraries/src/Exception/ExceptionHandler.php (line 128) ExceptionHandler::render() in /var/www/html/libraries/src/Exception/ExceptionHandler.php (line 71) ExceptionHandler::handleException() in /var/www/html/libraries/src/Application/CMSApplication.php (line 299) CMSApplication->execute() in /var/www/html/administrator/includes/app.php (line 63) require_once('/var/www/html/administrator/includes/app.php') in /var/www/html/administrator/index.php (line 32) ### System information (as much as possible) ### Additional comments
1.0
[4.0] UnknownAssetException: There is no "bootstrap.dropdown" asset of a "script" type in the registry. - ### Steps to reproduce the issue I updated my copy of J4 from git this morning (past git update was from Jan 26) and now I get this error when trying to access the Administrator. ### Expected result Administrator opens. ### Actual result Get this error. Below is the call stack. Is there a database update that needs to be done for Bootstrap 5 such that a git pull is insufficient to update? Joomla\CMS\WebAsset\Exception\ UnknownAssetException in /var/www/html/libraries/src/WebAsset/WebAssetRegistry.php (line 132) WebAssetRegistry->get() in /var/www/html/libraries/src/WebAsset/WebAssetManager.php (line 257) WebAssetManager->useAsset() in /var/www/html/libraries/src/WebAsset/WebAssetManager.php (line 181) WebAssetManager->__call() in /var/www/html/libraries/src/HTML/Helpers/Bootstrap.php (line 232) Bootstrap::dropdown() in /var/www/html/libraries/src/HTML/HTMLHelper.php (line 322) HTMLHelper::call() in /var/www/html/libraries/src/HTML/HTMLHelper.php (line 154) HTMLHelper::_() in /var/www/html/administrator/modules/mod_user/tmpl/default.php (line 19) require('/var/www/html/administrator/modules/mod_user/tmpl/default.php') in /var/www/html/administrator/modules/mod_user/mod_user.php (line 16) include('/var/www/html/administrator/modules/mod_user/mod_user.php') in /var/www/html/libraries/src/Dispatcher/ModuleDispatcher.php (line 54) ModuleDispatcher::Joomla\CMS\Dispatcher\{closure}() in /var/www/html/libraries/src/Dispatcher/ModuleDispatcher.php (line 57) ModuleDispatcher->dispatch() in /var/www/html/libraries/src/Helper/ModuleHelper.php (line 293) ModuleHelper::renderRawModule() in /var/www/html/libraries/src/Helper/ModuleHelper.php (line 166) ModuleHelper::renderModule() in /var/www/html/libraries/src/Document/Renderer/Html/ModuleRenderer.php (line 97) ModuleRenderer->render() in /var/www/html/libraries/src/Document/Renderer/Html/ModulesRenderer.php (line 48) ModulesRenderer->render() in /var/www/html/libraries/src/Document/HtmlDocument.php (line 589) HtmlDocument->getBuffer() in /var/www/html/libraries/src/Document/HtmlDocument.php (line 895) HtmlDocument->_renderTemplate() in /var/www/html/libraries/src/Document/HtmlDocument.php (line 660) HtmlDocument->render() in /var/www/html/libraries/src/Document/ErrorDocument.php (line 140) ErrorDocument->render() in /var/www/html/libraries/src/Error/Renderer/HtmlRenderer.php (line 76) HtmlRenderer->render() in /var/www/html/libraries/src/Exception/ExceptionHandler.php (line 128) ExceptionHandler::render() in /var/www/html/libraries/src/Exception/ExceptionHandler.php (line 71) ExceptionHandler::handleException() in /var/www/html/libraries/src/Application/CMSApplication.php (line 299) CMSApplication->execute() in /var/www/html/administrator/includes/app.php (line 63) require_once('/var/www/html/administrator/includes/app.php') in /var/www/html/administrator/index.php (line 32) ### System information (as much as possible) ### Additional comments
non_perf
unknownassetexception there is no bootstrap dropdown asset of a script type in the registry steps to reproduce the issue i updated my copy of from git this morning past git update was from jan and now i get this error when trying to access the administrator expected result administrator opens actual result get this error below is the call stack is there a database update that needs to be done for bootstrap such that a git pull is insufficient to update joomla cms webasset exception unknownassetexception in var www html libraries src webasset webassetregistry php line webassetregistry get in var www html libraries src webasset webassetmanager php line webassetmanager useasset in var www html libraries src webasset webassetmanager php line webassetmanager call in var www html libraries src html helpers bootstrap php line bootstrap dropdown in var www html libraries src html htmlhelper php line htmlhelper call in var www html libraries src html htmlhelper php line htmlhelper in var www html administrator modules mod user tmpl default php line require var www html administrator modules mod user tmpl default php in var www html administrator modules mod user mod user php line include var www html administrator modules mod user mod user php in var www html libraries src dispatcher moduledispatcher php line moduledispatcher joomla cms dispatcher closure in var www html libraries src dispatcher moduledispatcher php line moduledispatcher dispatch in var www html libraries src helper modulehelper php line modulehelper renderrawmodule in var www html libraries src helper modulehelper php line modulehelper rendermodule in var www html libraries src document renderer html modulerenderer php line modulerenderer render in var www html libraries src document renderer html modulesrenderer php line modulesrenderer render in var www html libraries src document htmldocument php line htmldocument getbuffer in var www html libraries src document htmldocument php line htmldocument rendertemplate in var www html libraries src document htmldocument php line htmldocument render in var www html libraries src document errordocument php line errordocument render in var www html libraries src error renderer htmlrenderer php line htmlrenderer render in var www html libraries src exception exceptionhandler php line exceptionhandler render in var www html libraries src exception exceptionhandler php line exceptionhandler handleexception in var www html libraries src application cmsapplication php line cmsapplication execute in var www html administrator includes app php line require once var www html administrator includes app php in var www html administrator index php line system information as much as possible additional comments
0
69,574
17,767,908,845
IssuesEvent
2021-08-30 09:55:13
srodrigo/anime-suupu
https://api.github.com/repos/srodrigo/anime-suupu
closed
Upgrade to node 14.17
build
This removes some warnings then doing npm install. ``` #10 1.887 npm WARN EBADENGINE Unsupported engine { #10 1.887 npm WARN EBADENGINE package: '@jest/console@27.0.6', #10 1.887 npm WARN EBADENGINE required: { node: '^10.13.0 || ^12.13.0 || ^14.15.0 || >=15.0.0' }, #10 1.887 npm WARN EBADENGINE current: { node: 'v14.7.0', npm: '7.19.1' } #10 1.887 npm WARN EBADENGINE } ```
1.0
Upgrade to node 14.17 - This removes some warnings then doing npm install. ``` #10 1.887 npm WARN EBADENGINE Unsupported engine { #10 1.887 npm WARN EBADENGINE package: '@jest/console@27.0.6', #10 1.887 npm WARN EBADENGINE required: { node: '^10.13.0 || ^12.13.0 || ^14.15.0 || >=15.0.0' }, #10 1.887 npm WARN EBADENGINE current: { node: 'v14.7.0', npm: '7.19.1' } #10 1.887 npm WARN EBADENGINE } ```
non_perf
upgrade to node this removes some warnings then doing npm install npm warn ebadengine unsupported engine npm warn ebadengine package jest console npm warn ebadengine required node npm warn ebadengine current node npm npm warn ebadengine
0
195,256
22,295,916,439
IssuesEvent
2022-06-13 01:32:21
n-devs/uiWebView
https://api.github.com/repos/n-devs/uiWebView
opened
CVE-2022-25851 (High) detected in jpeg-js-0.3.5.tgz
security vulnerability
## CVE-2022-25851 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jpeg-js-0.3.5.tgz</b></p></summary> <p>A pure javascript JPEG encoder and decoder</p> <p>Library home page: <a href="https://registry.npmjs.org/jpeg-js/-/jpeg-js-0.3.5.tgz">https://registry.npmjs.org/jpeg-js/-/jpeg-js-0.3.5.tgz</a></p> <p>Path to dependency file: /uiWebView/package.json</p> <p>Path to vulnerable library: /node_modules/jpeg-js/package.json</p> <p> Dependency Hierarchy: - get-pixels-3.3.2.tgz (Root Library) - :x: **jpeg-js-0.3.5.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://api.github.com/repos/n-psk/uiWebView/commits/c2829975424625f178515c9822baef2dafbce81c">c2829975424625f178515c9822baef2dafbce81c</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package jpeg-js before 0.4.4 are vulnerable to Denial of Service (DoS) where a particular piece of input will cause to enter an infinite loop and never return. <p>Publish Date: 2022-06-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25851>CVE-2022-25851</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-06-10</p> <p>Fix Resolution: jpeg-js - 0.4.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-25851 (High) detected in jpeg-js-0.3.5.tgz - ## CVE-2022-25851 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jpeg-js-0.3.5.tgz</b></p></summary> <p>A pure javascript JPEG encoder and decoder</p> <p>Library home page: <a href="https://registry.npmjs.org/jpeg-js/-/jpeg-js-0.3.5.tgz">https://registry.npmjs.org/jpeg-js/-/jpeg-js-0.3.5.tgz</a></p> <p>Path to dependency file: /uiWebView/package.json</p> <p>Path to vulnerable library: /node_modules/jpeg-js/package.json</p> <p> Dependency Hierarchy: - get-pixels-3.3.2.tgz (Root Library) - :x: **jpeg-js-0.3.5.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://api.github.com/repos/n-psk/uiWebView/commits/c2829975424625f178515c9822baef2dafbce81c">c2829975424625f178515c9822baef2dafbce81c</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package jpeg-js before 0.4.4 are vulnerable to Denial of Service (DoS) where a particular piece of input will cause to enter an infinite loop and never return. <p>Publish Date: 2022-06-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25851>CVE-2022-25851</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-06-10</p> <p>Fix Resolution: jpeg-js - 0.4.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_perf
cve high detected in jpeg js tgz cve high severity vulnerability vulnerable library jpeg js tgz a pure javascript jpeg encoder and decoder library home page a href path to dependency file uiwebview package json path to vulnerable library node modules jpeg js package json dependency hierarchy get pixels tgz root library x jpeg js tgz vulnerable library found in head commit a href vulnerability details the package jpeg js before are vulnerable to denial of service dos where a particular piece of input will cause to enter an infinite loop and never return publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution jpeg js step up your open source security game with mend
0
54,219
29,868,003,707
IssuesEvent
2023-06-20 06:21:21
eclipse-ee4j/jersey
https://api.github.com/repos/eclipse-ee4j/jersey
closed
Having @QueryParam without @DefaultValue throws expensive IllegalArgumentException for every missing querystring parameter
performance
When performance testing our new Jersey 2.38 application, i noticed that some endpoints were much slower than before (when using Apache Wink). I narrowed it down with Yourkit and noticed that `IllegalArgumentException` is thrown for every querystring parameter that's not present in the request querystring. When adding a `@DefaultValue`, this doesn't occur though. See relevant code here: https://github.com/eclipse-ee4j/jersey/blob/2.38/core-common/src/main/java/org/glassfish/jersey/internal/inject/ParamConverters.java#L63 Throwing exceptions to catch "regular" application flow is generally a bad habit and decreases performance of the application (https://www.baeldung.com/java-exceptions-performance). Is this intended behavior? E.g. when specifying a `@QueryParam` without `@DefaultValue`, does that automatically make it a required querystring parameter? The JAX-RS specification doesn't seem to reflect this, having an optional querystring parameter without a default value seems legit and should just return null. It becomes more noticable when you have endpoints with a lot of optional querystring parameters. More info can be found here https://stackoverflow.com/a/35625547/3032647. I would suggest that the relevant method could also just return null in this case: ``` @Override public T fromString(final String value) { if (value == null) { //throw new IllegalArgumentException(LocalizationMessages.METHOD_PARAMETER_CANNOT_BE_NULL("value")); return null; } try { return _fromString(value); } catch (final InvocationTargetException ex) { // if the value is an empty string, return null if (value.isEmpty()) { return null; } final Throwable cause = ex.getCause(); if (cause instanceof WebApplicationException) { throw (WebApplicationException) cause; } else { throw new ExtractorException(cause); } } catch (final Exception ex) { throw new ProcessingException(ex); } } ``` The same goes for querystring parameters of different types, like int. These throw a NumberFormatException.
True
Having @QueryParam without @DefaultValue throws expensive IllegalArgumentException for every missing querystring parameter - When performance testing our new Jersey 2.38 application, i noticed that some endpoints were much slower than before (when using Apache Wink). I narrowed it down with Yourkit and noticed that `IllegalArgumentException` is thrown for every querystring parameter that's not present in the request querystring. When adding a `@DefaultValue`, this doesn't occur though. See relevant code here: https://github.com/eclipse-ee4j/jersey/blob/2.38/core-common/src/main/java/org/glassfish/jersey/internal/inject/ParamConverters.java#L63 Throwing exceptions to catch "regular" application flow is generally a bad habit and decreases performance of the application (https://www.baeldung.com/java-exceptions-performance). Is this intended behavior? E.g. when specifying a `@QueryParam` without `@DefaultValue`, does that automatically make it a required querystring parameter? The JAX-RS specification doesn't seem to reflect this, having an optional querystring parameter without a default value seems legit and should just return null. It becomes more noticable when you have endpoints with a lot of optional querystring parameters. More info can be found here https://stackoverflow.com/a/35625547/3032647. I would suggest that the relevant method could also just return null in this case: ``` @Override public T fromString(final String value) { if (value == null) { //throw new IllegalArgumentException(LocalizationMessages.METHOD_PARAMETER_CANNOT_BE_NULL("value")); return null; } try { return _fromString(value); } catch (final InvocationTargetException ex) { // if the value is an empty string, return null if (value.isEmpty()) { return null; } final Throwable cause = ex.getCause(); if (cause instanceof WebApplicationException) { throw (WebApplicationException) cause; } else { throw new ExtractorException(cause); } } catch (final Exception ex) { throw new ProcessingException(ex); } } ``` The same goes for querystring parameters of different types, like int. These throw a NumberFormatException.
perf
having queryparam without defaultvalue throws expensive illegalargumentexception for every missing querystring parameter when performance testing our new jersey application i noticed that some endpoints were much slower than before when using apache wink i narrowed it down with yourkit and noticed that illegalargumentexception is thrown for every querystring parameter that s not present in the request querystring when adding a defaultvalue this doesn t occur though see relevant code here throwing exceptions to catch regular application flow is generally a bad habit and decreases performance of the application is this intended behavior e g when specifying a queryparam without defaultvalue does that automatically make it a required querystring parameter the jax rs specification doesn t seem to reflect this having an optional querystring parameter without a default value seems legit and should just return null it becomes more noticable when you have endpoints with a lot of optional querystring parameters more info can be found here i would suggest that the relevant method could also just return null in this case override public t fromstring final string value if value null throw new illegalargumentexception localizationmessages method parameter cannot be null value return null try return fromstring value catch final invocationtargetexception ex if the value is an empty string return null if value isempty return null final throwable cause ex getcause if cause instanceof webapplicationexception throw webapplicationexception cause else throw new extractorexception cause catch final exception ex throw new processingexception ex the same goes for querystring parameters of different types like int these throw a numberformatexception
1
6,545
5,514,507,125
IssuesEvent
2017-03-17 15:17:35
catapult-project/catapult
https://api.github.com/repos/catapult-project/catapult
opened
Dashboard - Investigate memcache calls in stored_object.
Hotlist:Perf Dashboard Performance Perf Dashboard
NDB is supposed to managed memcache for you, unless you explicitly change the functionality so the extra memcache calls in here seem redundant. Possibly remove them?
True
Dashboard - Investigate memcache calls in stored_object. - NDB is supposed to managed memcache for you, unless you explicitly change the functionality so the extra memcache calls in here seem redundant. Possibly remove them?
perf
dashboard investigate memcache calls in stored object ndb is supposed to managed memcache for you unless you explicitly change the functionality so the extra memcache calls in here seem redundant possibly remove them
1
403,733
27,432,327,275
IssuesEvent
2023-03-02 03:02:25
amishpatel0423/Ticket_tracker
https://api.github.com/repos/amishpatel0423/Ticket_tracker
opened
Account creation, login and logout
documentation
- Research and implement user authentication and authorization using a library in node.js. - Create routes for handling account creation and login. - Test the account creation and login functionality using Mocha - Create login sign-up pages using bootstrap.
1.0
Account creation, login and logout - - Research and implement user authentication and authorization using a library in node.js. - Create routes for handling account creation and login. - Test the account creation and login functionality using Mocha - Create login sign-up pages using bootstrap.
non_perf
account creation login and logout research and implement user authentication and authorization using a library in node js create routes for handling account creation and login test the account creation and login functionality using mocha create login sign up pages using bootstrap
0
20,162
10,616,409,247
IssuesEvent
2019-10-12 11:30:03
coq/coq
https://api.github.com/repos/coq/coq
closed
master uses 1.8x+ the memory as 8.9.1
kind: performance kind: regression part: vernac
<!-- Thank you for reporting a bug to Coq! --> #### Description of the problem Coq master seems to use almost twice as much memory as 8.9.1 when processing `Kami/CompileVerifiable.v`. This means we cannot run coq master in our CI, and thus cannot add kami to coq CI (#10306, and afaik this is the only remaining blocker for that). build [log](https://builds.sr.ht/~andres_tries_srht_github/job/83001) and [script](https://builds.sr.ht/api/jobs/83001/manifest) (failing after 2GB RAM runs out) I confirmed this locally, seeing 281178 minor page faults from 8.9.1 and 530762 from master, putting it just slightly over 2GB. ``` git clone --recursive git@github.com:sifive/RiscvSpecFormal.git cd RiscvSpecFormal make /usr/bin/time -v coqc -q -Q coq-record-update/src RecordUpdate -Q bbv/theories bbv -Q Kami Kami -Q FpuKami FpuKami -Q ProcKami ProcKami -Q StdLibKami StdLibKami Kami/CompileVerifiable.v ``` @tjmach @vmurali #### Coq Version e9c42c26d1fc653d1411fa2fe41b12bffa8ae992 vs 8.9.1
True
master uses 1.8x+ the memory as 8.9.1 - <!-- Thank you for reporting a bug to Coq! --> #### Description of the problem Coq master seems to use almost twice as much memory as 8.9.1 when processing `Kami/CompileVerifiable.v`. This means we cannot run coq master in our CI, and thus cannot add kami to coq CI (#10306, and afaik this is the only remaining blocker for that). build [log](https://builds.sr.ht/~andres_tries_srht_github/job/83001) and [script](https://builds.sr.ht/api/jobs/83001/manifest) (failing after 2GB RAM runs out) I confirmed this locally, seeing 281178 minor page faults from 8.9.1 and 530762 from master, putting it just slightly over 2GB. ``` git clone --recursive git@github.com:sifive/RiscvSpecFormal.git cd RiscvSpecFormal make /usr/bin/time -v coqc -q -Q coq-record-update/src RecordUpdate -Q bbv/theories bbv -Q Kami Kami -Q FpuKami FpuKami -Q ProcKami ProcKami -Q StdLibKami StdLibKami Kami/CompileVerifiable.v ``` @tjmach @vmurali #### Coq Version e9c42c26d1fc653d1411fa2fe41b12bffa8ae992 vs 8.9.1
perf
master uses the memory as description of the problem coq master seems to use almost twice as much memory as when processing kami compileverifiable v this means we cannot run coq master in our ci and thus cannot add kami to coq ci and afaik this is the only remaining blocker for that build and failing after ram runs out i confirmed this locally seeing minor page faults from and from master putting it just slightly over git clone recursive git github com sifive riscvspecformal git cd riscvspecformal make usr bin time v coqc q q coq record update src recordupdate q bbv theories bbv q kami kami q fpukami fpukami q prockami prockami q stdlibkami stdlibkami kami compileverifiable v tjmach vmurali coq version vs
1
54,213
29,865,514,488
IssuesEvent
2023-06-20 03:13:34
NuGet/Home
https://api.github.com/repos/NuGet/Home
opened
GetContentFileFolderRelativeToFramework allocates too much
Type:Bug Tenet:Performance
GetContentFileFolderRelativeToFramework allocates too many strings, enumerators and arrays. It can be made to allocate only the output string.
True
GetContentFileFolderRelativeToFramework allocates too much - GetContentFileFolderRelativeToFramework allocates too many strings, enumerators and arrays. It can be made to allocate only the output string.
perf
getcontentfilefolderrelativetoframework allocates too much getcontentfilefolderrelativetoframework allocates too many strings enumerators and arrays it can be made to allocate only the output string
1
23,844
16,618,506,307
IssuesEvent
2021-06-02 20:10:30
yt-project/yt
https://api.github.com/repos/yt-project/yt
closed
CI: failures on the main branch (?)
blocker infrastructure tests: running tests
### Bug report **Bug summary** Jenkins CI is apparently broken as some 44 failures are showing up on different PRs (#3234, #3236, #3270, #3295, to name a few I could identify) see for instance : PR #3295 https://tests.yt-project.org/job/yt_py38_git/3236/#showFailuresLink The errors seem to be related to incorrect field access, though it's not clear to me what change may have caused this. @Xarthisius, do you know if this could be a result of changes on the server, rather than on the main branch of the repo ?
1.0
CI: failures on the main branch (?) - ### Bug report **Bug summary** Jenkins CI is apparently broken as some 44 failures are showing up on different PRs (#3234, #3236, #3270, #3295, to name a few I could identify) see for instance : PR #3295 https://tests.yt-project.org/job/yt_py38_git/3236/#showFailuresLink The errors seem to be related to incorrect field access, though it's not clear to me what change may have caused this. @Xarthisius, do you know if this could be a result of changes on the server, rather than on the main branch of the repo ?
non_perf
ci failures on the main branch bug report bug summary jenkins ci is apparently broken as some failures are showing up on different prs to name a few i could identify see for instance pr the errors seem to be related to incorrect field access though it s not clear to me what change may have caused this xarthisius do you know if this could be a result of changes on the server rather than on the main branch of the repo
0
13,076
8,101,225,624
IssuesEvent
2018-08-12 11:06:18
phpstan/phpstan
https://api.github.com/repos/phpstan/phpstan
closed
phpstan becomes slow in 0.10.x
performance
### Summary of a problem or a feature request Exuding the file ([libraries/classes/Controllers/Server/ServerVariablesController.php](https://github.com/phpmyadmin/phpmyadmin/blob/2334119d8a6f25f40fde966fb6252ffe169d3bd6/libraries/classes/Controllers/Server/ServerVariablesController.php#L392)), the analysis takes 5 minutes. More than 45 minutes if not excluded (I had to stop the process because it is unacceptable). Using command : ```bash starttime=$(date +"%s") ; ./vendor/bin/phpstan analyse ./ --configuration=phpstan.neon --level=1 --memory-limit=1G --debug ; echo seconds=$(($(date +"%s")-$starttime)) ``` https://github.com/phpmyadmin/phpmyadmin/blob/master/libraries/classes/Relation.php took 6 min. 6x more on dev-master than with 0.10.1 https://github.com/phpmyadmin/phpmyadmin/blob/master/libraries/classes/Tracking.php hangs for 1 min and 10 seconds approximately. https://github.com/phpmyadmin/phpmyadmin/blob/2334119d8a6f25f40fde966fb6252ffe169d3bd6/libraries/classes/Controllers/Server/ServerVariablesController.php#L392 took forever ... ### Expected Run as fast as 0.9.x ### Actual Takes forever or take more than 10 minutes (TravisCI stops after 10 minutes running)
True
phpstan becomes slow in 0.10.x - ### Summary of a problem or a feature request Exuding the file ([libraries/classes/Controllers/Server/ServerVariablesController.php](https://github.com/phpmyadmin/phpmyadmin/blob/2334119d8a6f25f40fde966fb6252ffe169d3bd6/libraries/classes/Controllers/Server/ServerVariablesController.php#L392)), the analysis takes 5 minutes. More than 45 minutes if not excluded (I had to stop the process because it is unacceptable). Using command : ```bash starttime=$(date +"%s") ; ./vendor/bin/phpstan analyse ./ --configuration=phpstan.neon --level=1 --memory-limit=1G --debug ; echo seconds=$(($(date +"%s")-$starttime)) ``` https://github.com/phpmyadmin/phpmyadmin/blob/master/libraries/classes/Relation.php took 6 min. 6x more on dev-master than with 0.10.1 https://github.com/phpmyadmin/phpmyadmin/blob/master/libraries/classes/Tracking.php hangs for 1 min and 10 seconds approximately. https://github.com/phpmyadmin/phpmyadmin/blob/2334119d8a6f25f40fde966fb6252ffe169d3bd6/libraries/classes/Controllers/Server/ServerVariablesController.php#L392 took forever ... ### Expected Run as fast as 0.9.x ### Actual Takes forever or take more than 10 minutes (TravisCI stops after 10 minutes running)
perf
phpstan becomes slow in x summary of a problem or a feature request exuding the file the analysis takes minutes more than minutes if not excluded i had to stop the process because it is unacceptable using command bash starttime date s vendor bin phpstan analyse configuration phpstan neon level memory limit debug echo seconds date s starttime took min more on dev master than with hangs for min and seconds approximately took forever expected run as fast as x actual takes forever or take more than minutes travisci stops after minutes running
1
25,705
12,709,500,556
IssuesEvent
2020-06-23 12:28:07
unoplatform/uno
https://api.github.com/repos/unoplatform/uno
closed
Layouts with lots of buttons are taking too much time to load
area/wasm kind/bug kind/performance triage/needs-information
## Current behavior A page that have a great number of control takes too much to load. ## Expected behavior The page should take the same amount of time to load as in the UWP solution. ## How to reproduce it (as minimally and precisely as possible) Repo link: https://github.com/JhonH3avy/ControlLoadingPerformance.git ## Environment Nuget Package: - Uno.UI: 1.46.199-dev2445 Affected platform(s): - [ ] iOS - [ ] Android - [x] WebAssembly - [ ] Windows - [ ] Build tasks Visual Studio - [ ] 2017 (version: ) - [x] 2019 (version:16.2.1) - [ ] for Mac (version: ) Relevant plugins - [x] Resharper (version:183.0) ## Anything else we need to know? The repo has four pages that have different amounts of controls, the more controls the page have the time to load the page grows apparently linearly.
True
Layouts with lots of buttons are taking too much time to load - ## Current behavior A page that have a great number of control takes too much to load. ## Expected behavior The page should take the same amount of time to load as in the UWP solution. ## How to reproduce it (as minimally and precisely as possible) Repo link: https://github.com/JhonH3avy/ControlLoadingPerformance.git ## Environment Nuget Package: - Uno.UI: 1.46.199-dev2445 Affected platform(s): - [ ] iOS - [ ] Android - [x] WebAssembly - [ ] Windows - [ ] Build tasks Visual Studio - [ ] 2017 (version: ) - [x] 2019 (version:16.2.1) - [ ] for Mac (version: ) Relevant plugins - [x] Resharper (version:183.0) ## Anything else we need to know? The repo has four pages that have different amounts of controls, the more controls the page have the time to load the page grows apparently linearly.
perf
layouts with lots of buttons are taking too much time to load current behavior a page that have a great number of control takes too much to load expected behavior the page should take the same amount of time to load as in the uwp solution how to reproduce it as minimally and precisely as possible repo link environment nuget package uno ui affected platform s ios android webassembly windows build tasks visual studio version version for mac version relevant plugins resharper version anything else we need to know the repo has four pages that have different amounts of controls the more controls the page have the time to load the page grows apparently linearly
1
36,014
17,391,977,748
IssuesEvent
2021-08-02 08:35:55
NVIDIA/TensorRT
https://api.github.com/repos/NVIDIA/TensorRT
closed
converted bart model is slower than the original one during inference time
Component: ONNX Runtime: Performance triaged
hi there, I have a project to use facebook bart for news summerization. In order to make the inference faster, we are trying to convert part of the model to tensorrt and then interegerated into the original model. Via this repo, I have successfully converted facebook bart decoder layers to tensorrt model, and successfully integerated, however, the total inference time of generated tokens of the new bart model(i.e. the model integerated with converted tensorrt decoder layer) is 2 times slower than the original one, so, I tried to find why, and finally I found that the new bart model itself is faster than the original one, see code below, line1 is faster than before after changing with new bart model, but is became much slower after line2, line1: outputs = self(model_inputs, return_dict=True) line2: next_token_logits = outputs.logits[:, -1, :] line3: next_token_logits = self.adjust_logits_during_generation( line4: next_token_logits, cur_len=cur_len, max_length=max_length) below you can find the comparing speed of new bart model and original one (corresponding to comparing results of code line1 above), <img width="292" alt="1" src="https://user-images.githubusercontent.com/13851442/106444599-edb8bf80-64b8-11eb-8a55-a76e342b1447.PNG"> below you can find the comparing speed of new bart model and original one(corresponding to comparing results of code after line2 above) <img width="353" alt="2" src="https://user-images.githubusercontent.com/13851442/106444940-615acc80-64b9-11eb-914b-fc6df0f849ee.PNG"> Does anyone knows why it became slow after line1 code above?
True
converted bart model is slower than the original one during inference time - hi there, I have a project to use facebook bart for news summerization. In order to make the inference faster, we are trying to convert part of the model to tensorrt and then interegerated into the original model. Via this repo, I have successfully converted facebook bart decoder layers to tensorrt model, and successfully integerated, however, the total inference time of generated tokens of the new bart model(i.e. the model integerated with converted tensorrt decoder layer) is 2 times slower than the original one, so, I tried to find why, and finally I found that the new bart model itself is faster than the original one, see code below, line1 is faster than before after changing with new bart model, but is became much slower after line2, line1: outputs = self(model_inputs, return_dict=True) line2: next_token_logits = outputs.logits[:, -1, :] line3: next_token_logits = self.adjust_logits_during_generation( line4: next_token_logits, cur_len=cur_len, max_length=max_length) below you can find the comparing speed of new bart model and original one (corresponding to comparing results of code line1 above), <img width="292" alt="1" src="https://user-images.githubusercontent.com/13851442/106444599-edb8bf80-64b8-11eb-8a55-a76e342b1447.PNG"> below you can find the comparing speed of new bart model and original one(corresponding to comparing results of code after line2 above) <img width="353" alt="2" src="https://user-images.githubusercontent.com/13851442/106444940-615acc80-64b9-11eb-914b-fc6df0f849ee.PNG"> Does anyone knows why it became slow after line1 code above?
perf
converted bart model is slower than the original one during inference time hi there i have a project to use facebook bart for news summerization in order to make the inference faster we are trying to convert part of the model to tensorrt and then interegerated into the original model via this repo i have successfully converted facebook bart decoder layers to tensorrt model and successfully integerated however the total inference time of generated tokens of the new bart model i e the model integerated with converted tensorrt decoder layer is times slower than the original one so i tried to find why and finally i found that the new bart model itself is faster than the original one see code below is faster than before after changing with new bart model but is became much slower after outputs self model inputs return dict true next token logits outputs logits next token logits self adjust logits during generation next token logits cur len cur len max length max length below you can find the comparing speed of new bart model and original one corresponding to comparing results of code above img width alt src below you can find the comparing speed of new bart model and original one corresponding to comparing results of code after above img width alt src does anyone knows why it became slow after code above
1
48,249
25,451,172,388
IssuesEvent
2022-11-24 10:33:42
topling/toplingdb
https://api.github.com/repos/topling/toplingdb
closed
FindFileInRange: devirtualize comparator and add prefix cache
performance
## This was a PR to upstream https://github.com/facebook/rocksdb/pull/10646 ## Copied from https://github.com/facebook/rocksdb/pull/10646 This PR is based on https://github.com/facebook/rocksdb/pull/10645. If comparator is BytewiseComparator or ReverseBytewiseComparator: devirtualize comparator: specialize the impl by direct call memcmp add prefix cache: narrow the search range by prefix cache, then find by comparator ## Relevant commits 3304b5b5d586f3abbd80a65eed700b543c461c2c 0f98a93ebb4c968b56fbc300cd1bae5d35bc54ce 9ae79bdea0d6af9c63126a53441167aee76b2301 271a43d6a255bec75fbe0973bfa87373ee165be7 0ae017a2c491b6093c23a8ea9d3e09c51e33e169
True
FindFileInRange: devirtualize comparator and add prefix cache - ## This was a PR to upstream https://github.com/facebook/rocksdb/pull/10646 ## Copied from https://github.com/facebook/rocksdb/pull/10646 This PR is based on https://github.com/facebook/rocksdb/pull/10645. If comparator is BytewiseComparator or ReverseBytewiseComparator: devirtualize comparator: specialize the impl by direct call memcmp add prefix cache: narrow the search range by prefix cache, then find by comparator ## Relevant commits 3304b5b5d586f3abbd80a65eed700b543c461c2c 0f98a93ebb4c968b56fbc300cd1bae5d35bc54ce 9ae79bdea0d6af9c63126a53441167aee76b2301 271a43d6a255bec75fbe0973bfa87373ee165be7 0ae017a2c491b6093c23a8ea9d3e09c51e33e169
perf
findfileinrange devirtualize comparator and add prefix cache this was a pr to upstream copied from this pr is based on if comparator is bytewisecomparator or reversebytewisecomparator devirtualize comparator specialize the impl by direct call memcmp add prefix cache narrow the search range by prefix cache then find by comparator relevant commits
1
41,349
21,647,405,864
IssuesEvent
2022-05-06 04:51:45
JuliaData/TypedTables.jl
https://api.github.com/repos/JuliaData/TypedTables.jl
closed
How to deal with latency with large number of columns?
performance
Currently I have a column type that is lazy. It represents ~GB of stuff that needs to be read and decompressed on the fly and cached (by chunk). Turns out I can construct `Table` nicely and the laziness works. However, sometimes we have 1000+ columns, in this case the compiler struggles a lot. Is it possible to have a less-typed but same interfaced `Table`?
True
How to deal with latency with large number of columns? - Currently I have a column type that is lazy. It represents ~GB of stuff that needs to be read and decompressed on the fly and cached (by chunk). Turns out I can construct `Table` nicely and the laziness works. However, sometimes we have 1000+ columns, in this case the compiler struggles a lot. Is it possible to have a less-typed but same interfaced `Table`?
perf
how to deal with latency with large number of columns currently i have a column type that is lazy it represents gb of stuff that needs to be read and decompressed on the fly and cached by chunk turns out i can construct table nicely and the laziness works however sometimes we have columns in this case the compiler struggles a lot is it possible to have a less typed but same interfaced table
1
18,051
9,986,346,257
IssuesEvent
2019-07-10 18:52:05
tensorflow/tensorflow
https://api.github.com/repos/tensorflow/tensorflow
closed
TF2 - apparent memory leak when running dataset ops eagerly
2.0.0-beta0 comp:data type:bug/performance
**System information** - Have I written custom code: yes - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): OSX - TensorFlow installed from (source or binary): 2.0.0beta - TensorFlow version (use command below): v1.12.1-3259-gf59745a381 2.0.0-beta0 - Python version: 3.6.8 **Describe the current behavior** When using the function `tf.autograph.to_graph`, I see a memory leak which I don't see if I use the annotation `@tf.function` **Describe the expected behavior** There should not be a memory leak. **Code to reproduce the issue** ```python import os import psutil import numpy as np import tensorflow as tf process = psutil.Process(os.getpid()) # @tf.function def train_epoch(model, p_data): for real_inputs in p_data: model * real_inputs train_epoch = tf.autograph.to_graph(train_epoch) data = np.random.normal(0., 1., [10000, 2]) p_data = tf.data.Dataset.from_tensor_slices(data).batch(32) model = tf.Variable([1., 1.], dtype=tf.float64) for i in range(5000): train_epoch(model, p_data) if i % 50 == 0: print(process.memory_info().rss) ```
True
TF2 - apparent memory leak when running dataset ops eagerly - **System information** - Have I written custom code: yes - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): OSX - TensorFlow installed from (source or binary): 2.0.0beta - TensorFlow version (use command below): v1.12.1-3259-gf59745a381 2.0.0-beta0 - Python version: 3.6.8 **Describe the current behavior** When using the function `tf.autograph.to_graph`, I see a memory leak which I don't see if I use the annotation `@tf.function` **Describe the expected behavior** There should not be a memory leak. **Code to reproduce the issue** ```python import os import psutil import numpy as np import tensorflow as tf process = psutil.Process(os.getpid()) # @tf.function def train_epoch(model, p_data): for real_inputs in p_data: model * real_inputs train_epoch = tf.autograph.to_graph(train_epoch) data = np.random.normal(0., 1., [10000, 2]) p_data = tf.data.Dataset.from_tensor_slices(data).batch(32) model = tf.Variable([1., 1.], dtype=tf.float64) for i in range(5000): train_epoch(model, p_data) if i % 50 == 0: print(process.memory_info().rss) ```
perf
apparent memory leak when running dataset ops eagerly system information have i written custom code yes os platform and distribution e g linux ubuntu osx tensorflow installed from source or binary tensorflow version use command below python version describe the current behavior when using the function tf autograph to graph i see a memory leak which i don t see if i use the annotation tf function describe the expected behavior there should not be a memory leak code to reproduce the issue python import os import psutil import numpy as np import tensorflow as tf process psutil process os getpid tf function def train epoch model p data for real inputs in p data model real inputs train epoch tf autograph to graph train epoch data np random normal p data tf data dataset from tensor slices data batch model tf variable dtype tf for i in range train epoch model p data if i print process memory info rss
1
18,575
13,046,744,677
IssuesEvent
2020-07-29 09:30:21
OpenRA/OpenRA
https://api.github.com/repos/OpenRA/OpenRA
closed
macOS: suppress shortcut cmd+Q
Idea/Wishlist OS: MacOS X Usability
The default key combination for terminating applications under macOS is cmd+Q. In OpenRA, the key combination cmd+<number key> combines units into a group. This can lead to a premature end of the game. In the hotkey settings the key combination for cmd+Q can be assigned, but it will always terminate the application. The key combination should be intercepted to prevent the application from being terminated without asking. Alternatively, a dialog should be displayed asking if you really want to quit the application.
True
macOS: suppress shortcut cmd+Q - The default key combination for terminating applications under macOS is cmd+Q. In OpenRA, the key combination cmd+<number key> combines units into a group. This can lead to a premature end of the game. In the hotkey settings the key combination for cmd+Q can be assigned, but it will always terminate the application. The key combination should be intercepted to prevent the application from being terminated without asking. Alternatively, a dialog should be displayed asking if you really want to quit the application.
non_perf
macos suppress shortcut cmd q the default key combination for terminating applications under macos is cmd q in openra the key combination cmd combines units into a group this can lead to a premature end of the game in the hotkey settings the key combination for cmd q can be assigned but it will always terminate the application the key combination should be intercepted to prevent the application from being terminated without asking alternatively a dialog should be displayed asking if you really want to quit the application
0
77,195
3,506,270,780
IssuesEvent
2016-01-08 05:10:03
OregonCore/OregonCore
https://api.github.com/repos/OregonCore/OregonCore
closed
Talent's Bug Important ! (BB #257)
duplicate migrated Priority: Medium Type: Bug
This issue was migrated from bitbucket. **Original Reporter:** **Original Date:** 07.08.2010 05:12:57 GMT+0000 **Original Priority:** major **Original Type:** bug **Original State:** duplicate **Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/257 <hr> when Get talent point for example 0/3, 1/3 , 2/3 , 3/3 here if Again hit the talent buttom Can Get 1/3 , 2/3 , 3/3 This is a big Problem it is Not God For this project !
1.0
Talent's Bug Important ! (BB #257) - This issue was migrated from bitbucket. **Original Reporter:** **Original Date:** 07.08.2010 05:12:57 GMT+0000 **Original Priority:** major **Original Type:** bug **Original State:** duplicate **Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/257 <hr> when Get talent point for example 0/3, 1/3 , 2/3 , 3/3 here if Again hit the talent buttom Can Get 1/3 , 2/3 , 3/3 This is a big Problem it is Not God For this project !
non_perf
talent s bug important bb this issue was migrated from bitbucket original reporter original date gmt original priority major original type bug original state duplicate direct link when get talent point for example here if again hit the talent buttom can get this is a big problem it is not god for this project
0
102,656
12,814,349,540
IssuesEvent
2020-07-04 18:13:13
magento/adobe-stock-integration
https://api.github.com/repos/magento/adobe-stock-integration
closed
[Spike] Investigate meta data extracting
Priority: P1 Progress: PR created requires technical design
As a Merchant I want Magento Media Storage to parse image meta data that is available in the file and expose it on image details page so that I can manage it and use for filtering images **Additional context** https://github.com/magento/adobe-stock-integration/issues/1183 # Open questions: - is there any required set of fields that exist in image meta data? Like title, description, keywords. Or any fields can be removed/added to the file according to standard? - is there a way in a meta data file to define what fields can be edited by user in the application? e.g. Camera data, image format, creation and modification date can not be edited
1.0
[Spike] Investigate meta data extracting - As a Merchant I want Magento Media Storage to parse image meta data that is available in the file and expose it on image details page so that I can manage it and use for filtering images **Additional context** https://github.com/magento/adobe-stock-integration/issues/1183 # Open questions: - is there any required set of fields that exist in image meta data? Like title, description, keywords. Or any fields can be removed/added to the file according to standard? - is there a way in a meta data file to define what fields can be edited by user in the application? e.g. Camera data, image format, creation and modification date can not be edited
non_perf
investigate meta data extracting as a merchant i want magento media storage to parse image meta data that is available in the file and expose it on image details page so that i can manage it and use for filtering images additional context open questions is there any required set of fields that exist in image meta data like title description keywords or any fields can be removed added to the file according to standard is there a way in a meta data file to define what fields can be edited by user in the application e g camera data image format creation and modification date can not be edited
0
6,579
5,533,877,465
IssuesEvent
2017-03-21 14:19:49
wojtpl2/ExtendedXmlSerializer
https://api.github.com/repos/wojtpl2/ExtendedXmlSerializer
opened
Performance Issues
help wanted performance
Well, double whammy. While implementing the feature for #56, I found out two issues. 1. We have a performance creep between 4f015846a5fb899eb5865e432b095722b273d0c1 and the current commit (b61e39944009dc07fa9124e7de7ad2df6179ae92). Here is the performance from the first: ``` Method | Mean | StdDev | ---------------------------------- |----------- |---------- | SerializationClassWithPrimitive | 34.9980 us | 0.0149 us | DeserializationClassWithPrimitive | 44.8417 us | 0.0299 us | ``` And here it is currently: ``` Method | Mean | StdDev | ---------------------------------- |----------- |---------- | SerializationClassWithPrimitive | 37.0960 us | 0.0456 us | DeserializationClassWithPrimitive | 48.5083 us | 0.0949 us | ``` I took some time to see where it could be taking place, but could not see anything obvious. I did find one area that I fixed, but it is still too slow. To be honest, I am a little burnt out on fixing the performance, so I am definitely open to any assistance here. It has easily consumed 40% of my time on this project, if not more. Secondly -- and probably worse -- it appears that the way in which we were testing the original `XmlSerializer` was not accurate. I have updated the tests so [that they are doing the same thing](https://github.com/wojtpl2/ExtendedXmlSerializer/blob/v2.0.0/test/ExtendedXmlSerializer.Tests.Performance/Benchmarks.cs#L112-L151), and here is the new results: ``` Method | Mean | StdDev | ---------------------------------- |----------- |---------- | SerializationClassWithPrimitive | 40.8919 us | 0.2061 us | DeserializationClassWithPrimitive | 57.7255 us | 0.0529 us | ``` This is from ~62/60 on my machine. So, a considerable jump. Just so you know, I start a new work project on April 3rd, so I will not be able to help out here much after that. I hope to have all the outstanding issues complete by then. Although I am not so sure about this one. If you want to help out and look at this issue, please feel free to do so.
True
Performance Issues - Well, double whammy. While implementing the feature for #56, I found out two issues. 1. We have a performance creep between 4f015846a5fb899eb5865e432b095722b273d0c1 and the current commit (b61e39944009dc07fa9124e7de7ad2df6179ae92). Here is the performance from the first: ``` Method | Mean | StdDev | ---------------------------------- |----------- |---------- | SerializationClassWithPrimitive | 34.9980 us | 0.0149 us | DeserializationClassWithPrimitive | 44.8417 us | 0.0299 us | ``` And here it is currently: ``` Method | Mean | StdDev | ---------------------------------- |----------- |---------- | SerializationClassWithPrimitive | 37.0960 us | 0.0456 us | DeserializationClassWithPrimitive | 48.5083 us | 0.0949 us | ``` I took some time to see where it could be taking place, but could not see anything obvious. I did find one area that I fixed, but it is still too slow. To be honest, I am a little burnt out on fixing the performance, so I am definitely open to any assistance here. It has easily consumed 40% of my time on this project, if not more. Secondly -- and probably worse -- it appears that the way in which we were testing the original `XmlSerializer` was not accurate. I have updated the tests so [that they are doing the same thing](https://github.com/wojtpl2/ExtendedXmlSerializer/blob/v2.0.0/test/ExtendedXmlSerializer.Tests.Performance/Benchmarks.cs#L112-L151), and here is the new results: ``` Method | Mean | StdDev | ---------------------------------- |----------- |---------- | SerializationClassWithPrimitive | 40.8919 us | 0.2061 us | DeserializationClassWithPrimitive | 57.7255 us | 0.0529 us | ``` This is from ~62/60 on my machine. So, a considerable jump. Just so you know, I start a new work project on April 3rd, so I will not be able to help out here much after that. I hope to have all the outstanding issues complete by then. Although I am not so sure about this one. If you want to help out and look at this issue, please feel free to do so.
perf
performance issues well double whammy while implementing the feature for i found out two issues we have a performance creep between and the current commit here is the performance from the first method mean stddev serializationclasswithprimitive us us deserializationclasswithprimitive us us and here it is currently method mean stddev serializationclasswithprimitive us us deserializationclasswithprimitive us us i took some time to see where it could be taking place but could not see anything obvious i did find one area that i fixed but it is still too slow to be honest i am a little burnt out on fixing the performance so i am definitely open to any assistance here it has easily consumed of my time on this project if not more secondly and probably worse it appears that the way in which we were testing the original xmlserializer was not accurate i have updated the tests so and here is the new results method mean stddev serializationclasswithprimitive us us deserializationclasswithprimitive us us this is from on my machine so a considerable jump just so you know i start a new work project on april so i will not be able to help out here much after that i hope to have all the outstanding issues complete by then although i am not so sure about this one if you want to help out and look at this issue please feel free to do so
1
44,315
23,551,478,880
IssuesEvent
2022-08-21 22:10:03
neovim/neovim
https://api.github.com/repos/neovim/neovim
closed
Expanding wildcards in directories with several self-referential links is very, very slow
performance bug-vim filesystem
### Neovim version (nvim -v) 0.6.1 ### Vim (not Nvim) behaves the same? yes, 8.2.3995 ### Operating system/version PoP OS 22.04 ### Terminal name/version GNOME Terminal 3.44.0 using VTE 0.68.0 +BIDI +GNUTLS +ICU +SYSTEMD ### $TERM environment variable xterm-256color ### Installation apt ### How to reproduce the issue 1. In a UNIX shell, create a directory with several self-referential links: ``` $ mkdir foo $ cd foo $ ln -s ./ link1 $ ln -s ./ link2 $ ln -s ./ link3 $ ln -s ./ link4 $ ln -s ./ link5 $ ls link1 link2 link3 link4 link5 ``` 2. Open `nvim` in that directory: ``` $ vim --clean ``` 3. Expand `**`: ```vim :echom expand('**') ``` 4. Press enter ### Expected behavior Neovim should expand the wildcard within a reasonable amount of time. ### Actual behavior Neovim attempts to expand `**` for a very, very long time (I haven't let it run long enough to finish).
True
Expanding wildcards in directories with several self-referential links is very, very slow - ### Neovim version (nvim -v) 0.6.1 ### Vim (not Nvim) behaves the same? yes, 8.2.3995 ### Operating system/version PoP OS 22.04 ### Terminal name/version GNOME Terminal 3.44.0 using VTE 0.68.0 +BIDI +GNUTLS +ICU +SYSTEMD ### $TERM environment variable xterm-256color ### Installation apt ### How to reproduce the issue 1. In a UNIX shell, create a directory with several self-referential links: ``` $ mkdir foo $ cd foo $ ln -s ./ link1 $ ln -s ./ link2 $ ln -s ./ link3 $ ln -s ./ link4 $ ln -s ./ link5 $ ls link1 link2 link3 link4 link5 ``` 2. Open `nvim` in that directory: ``` $ vim --clean ``` 3. Expand `**`: ```vim :echom expand('**') ``` 4. Press enter ### Expected behavior Neovim should expand the wildcard within a reasonable amount of time. ### Actual behavior Neovim attempts to expand `**` for a very, very long time (I haven't let it run long enough to finish).
perf
expanding wildcards in directories with several self referential links is very very slow neovim version nvim v vim not nvim behaves the same yes operating system version pop os terminal name version gnome terminal using vte bidi gnutls icu systemd term environment variable xterm installation apt how to reproduce the issue in a unix shell create a directory with several self referential links mkdir foo cd foo ln s ln s ln s ln s ln s ls open nvim in that directory vim clean expand vim echom expand press enter expected behavior neovim should expand the wildcard within a reasonable amount of time actual behavior neovim attempts to expand for a very very long time i haven t let it run long enough to finish
1
31,545
14,988,488,580
IssuesEvent
2021-01-29 01:22:33
GaloisInc/crucible
https://api.github.com/repos/GaloisInc/crucible
opened
crux-mir performance regression
MIR crux performance
`crux-mir`'s `symb_eval/scalar/test1.rs` test case has gotten slower: on my machine, it takes around 2m30s to run in commit 9bb4b78b0412698046837d8057b99adf6ab81459 (when `crux-mir` was merged into this repository), and now takes around 3m30s as of 09532645133d39cda983bca67c2dbb91d05e4373 (current `master`). `git bisect` blames ced74d4a5e4d7eb01fdb9f3d6ada382a365fc22f "Add a configuration option that controls if the online backend methods for maintaining a connection with a solver are enabled" from #570. On ced74d4a5e4d7eb01fdb9f3d6ada382a365fc22f the test runs in about 3m20s, while on 12a06e3d24d25b2016b7491d7c282dcfd634380b (its immediate ancestor) it runs in 2m25s. I've been timing the test using the following command: ```sh time cabal v2-run -- crux-mir --assert-false-on-error -s z3 test/symb_eval/scalar/test1.rs ``` (After building first, so the `time` doesn't include the time spent building) Note that the two commits in question (ced74d4a5e4d7eb01fdb9f3d6ada382a365fc22f and 12a06e3d24d25b2016b7491d7c282dcfd634380b) don't build as-is, due to changes on Hackage. Both require changing `what4.cabal`'s bound on the `versions` dependency from `versions >= 3.5.2` to `versions >= 3.5.2 && < 4.0` (otherwise you'll get a type error about `Versions.VChunk` and `Versions.VUnit`). Furthermore, ced74d4a5e4d7eb01fdb9f3d6ada382a365fc22f requires the `crux-mir` build fix from d18fd5074433dad46c9917b159b8b9aaf6d8da24 - I applied it by running `git checkout d18fd50 -- crux-mir` in the top-level `crucible` directory.
True
crux-mir performance regression - `crux-mir`'s `symb_eval/scalar/test1.rs` test case has gotten slower: on my machine, it takes around 2m30s to run in commit 9bb4b78b0412698046837d8057b99adf6ab81459 (when `crux-mir` was merged into this repository), and now takes around 3m30s as of 09532645133d39cda983bca67c2dbb91d05e4373 (current `master`). `git bisect` blames ced74d4a5e4d7eb01fdb9f3d6ada382a365fc22f "Add a configuration option that controls if the online backend methods for maintaining a connection with a solver are enabled" from #570. On ced74d4a5e4d7eb01fdb9f3d6ada382a365fc22f the test runs in about 3m20s, while on 12a06e3d24d25b2016b7491d7c282dcfd634380b (its immediate ancestor) it runs in 2m25s. I've been timing the test using the following command: ```sh time cabal v2-run -- crux-mir --assert-false-on-error -s z3 test/symb_eval/scalar/test1.rs ``` (After building first, so the `time` doesn't include the time spent building) Note that the two commits in question (ced74d4a5e4d7eb01fdb9f3d6ada382a365fc22f and 12a06e3d24d25b2016b7491d7c282dcfd634380b) don't build as-is, due to changes on Hackage. Both require changing `what4.cabal`'s bound on the `versions` dependency from `versions >= 3.5.2` to `versions >= 3.5.2 && < 4.0` (otherwise you'll get a type error about `Versions.VChunk` and `Versions.VUnit`). Furthermore, ced74d4a5e4d7eb01fdb9f3d6ada382a365fc22f requires the `crux-mir` build fix from d18fd5074433dad46c9917b159b8b9aaf6d8da24 - I applied it by running `git checkout d18fd50 -- crux-mir` in the top-level `crucible` directory.
perf
crux mir performance regression crux mir s symb eval scalar rs test case has gotten slower on my machine it takes around to run in commit when crux mir was merged into this repository and now takes around as of current master git bisect blames add a configuration option that controls if the online backend methods for maintaining a connection with a solver are enabled from on the test runs in about while on its immediate ancestor it runs in i ve been timing the test using the following command sh time cabal run crux mir assert false on error s test symb eval scalar rs after building first so the time doesn t include the time spent building note that the two commits in question and don t build as is due to changes on hackage both require changing cabal s bound on the versions dependency from versions to versions otherwise you ll get a type error about versions vchunk and versions vunit furthermore requires the crux mir build fix from i applied it by running git checkout crux mir in the top level crucible directory
1
29,396
14,108,654,560
IssuesEvent
2020-11-06 18:11:06
nvm-sh/nvm
https://api.github.com/repos/nvm-sh/nvm
closed
init-nvm.sh - slow (mostly while executing "npm")
performance
<!-- Thank you for being interested in nvm! Please help us by filling out the following form if you‘re having trouble. If you have a feature request, or some other question, please feel free to clear out the form. Thanks! --> #### Operating system and version: #### `nvm debug` output: <details> <!-- do not delete the following blank line --> ```sh nvm --version: v0.35.0 $SHELL: /bin/bash $SHLVL: 1 ${HOME}: /home/kolorafa ${NVM_DIR}: '${HOME}/.nvm' ${PATH}: ${NVM_DIR}/versions/node/v8.17.0/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/opt/android-sdk/tools:/opt/android-sdk/tools/bin:/opt/COMODO:/opt/cuda/bin:/var/lib/flatpak/exports/bin:/usr/lib/jvm/default/bin:/usr/lib32/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/var/lib/snapd/snap/bin:/opt/xtensa-lx106-elf-gcc/bin:${HOME}/.cargo/bin:${HOME}/.local/bin $PREFIX: '' ${NPM_CONFIG_PREFIX}: '' $NVM_NODEJS_ORG_MIRROR: '' $NVM_IOJS_ORG_MIRROR: '' shell version: 'GNU bash, wersja 5.0.17(1)-release (x86_64-pc-linux-gnu)' uname -a: 'Linux 5.7.2-arch1-1 #1 SMP PREEMPT Wed, 10 Jun 2020 20:36:24 +0000 x86_64 GNU/Linux' OS version: Antergos Linux () curl: curl jest /usr/bin/curl, curl 7.70.0 (x86_64-pc-linux-gnu) libcurl/7.70.0 OpenSSL/1.1.1g zlib/1.2.11 libidn2/2.3.0 libpsl/0.21.0 (+libidn2/2.2.0) libssh2/1.9.0 nghttp2/1.41.0 wget: wget jest /usr/bin/wget, GNU Wget 1.20.3 zbudowany na systemie linux-gnu. ls: nie ma dostępu do 'git': Nie ma takiego pliku ani katalogu git: git jest /usr/bin/git, git version 2.27.0 ls: nie ma dostępu do 'grep': Nie ma takiego pliku ani katalogu grep: grep jest aliasem do grep --colour=auto', grep (GNU grep) 3.4 ls: nie ma dostępu do 'awk': Nie ma takiego pliku ani katalogu awk: awk jest /usr/bin/awk, GNU Awk 5.1.0, API: 3.0 (GNU MPFR 4.0.2, GNU MP 6.2.0) ls: nie ma dostępu do 'sed': Nie ma takiego pliku ani katalogu sed: sed jest /usr/bin/sed, sed (GNU sed) 4.8 ls: nie ma dostępu do 'cut': Nie ma takiego pliku ani katalogu cut: cut jest /usr/bin/cut, cut (GNU coreutils) 8.32 ls: nie ma dostępu do 'basename': Nie ma takiego pliku ani katalogu basename: basename jest /usr/bin/basename, basename (GNU coreutils) 8.32 ls: nie ma dostępu do 'rm': Nie ma takiego pliku ani katalogu rm: rm jest /usr/bin/rm, rm (GNU coreutils) 8.32 ls: nie ma dostępu do 'mkdir': Nie ma takiego pliku ani katalogu mkdir: mkdir jest /usr/bin/mkdir, mkdir (GNU coreutils) 8.32 ls: nie ma dostępu do 'xargs': Nie ma takiego pliku ani katalogu xargs: xargs jest /usr/bin/xargs, xargs (GNU findutils) 4.7.0 nvm current: v8.17.0 which node: ${NVM_DIR}/versions/node/v8.17.0/bin/node which iojs: which: no iojs in (${NVM_DIR}/versions/node/v8.17.0/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/opt/android-sdk/tools:/opt/android-sdk/tools/bin:/opt/COMODO:/opt/cuda/bin:/var/lib/flatpak/exports/bin:/usr/lib/jvm/default/bin:/usr/lib32/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/var/lib/snapd/snap/bin:/opt/xtensa-lx106-elf-gcc/bin:${HOME}/.cargo/bin:${HOME}/.local/bin) which npm: ${NVM_DIR}/versions/node/v8.17.0/bin/npm npm config get prefix: ${NVM_DIR}/versions/node/v8.17.0 npm root -g: ${NVM_DIR}/versions/node/v8.17.0/lib/node_modules ``` </details> #### `nvm ls` output: <details> <!-- do not delete the following blank line --> ```sh -> v8.17.0 v12.16.1 system default -> 8 (-> v8.17.0) node -> stable (-> v12.16.1) (default) stable -> 12.16 (-> v12.16.1) (default) iojs -> N/A (default) unstable -> N/A (default) lts/* -> lts/erbium (-> v12.16.1) lts/argon -> v4.9.1 (-> N/A) lts/boron -> v6.17.1 (-> N/A) lts/carbon -> v8.17.0 lts/dubnium -> v10.19.0 (-> N/A) lts/erbium -> v12.16.1 ``` </details> #### How did you install `nvm`? Arch - nvm AUR #### What steps did you perform? Open new gnome terminal by shortcut #### What happened? Terminal opened but bash didn't show for 20s (when HD was slowed down by copy) While HD moderatly used - bash show in 1-2s ( https://youtu.be/Ie4Nnml55-g ) While HD was very busy and terminal load in 20s, i used different already open terminal and while doing ps i found that it's stuck at "npm" command: ``` kolorafa 3024410 0.0 0.0 15844 9960 pts/9 Ss 08:07 0:00 | \_ bash kolorafa 3026045 0.0 0.0 13008 4632 pts/9 R+ 08:09 0:00 | | \_ ps auxf kolorafa 3025222 0.0 0.0 14364 8504 pts/11 Ss+ 08:08 0:00 | \_ bash kolorafa 3025440 0.4 0.1 1036508 39904 pts/11 Dl+ 08:08 0:00 | | \_ npm kolorafa 3025697 0.1 0.0 14364 8348 pts/13 Ss+ 08:09 0:00 | \_ bash kolorafa 3025914 0.8 0.1 1036512 40236 pts/13 Dl+ 08:09 0:00 | \_ npm ``` #### What did you expect to happen? NVM loads without slowing down anything ;) #### Is there anything in any of your profile files that modifies the `PATH`? Don't think anything related
True
init-nvm.sh - slow (mostly while executing "npm") - <!-- Thank you for being interested in nvm! Please help us by filling out the following form if you‘re having trouble. If you have a feature request, or some other question, please feel free to clear out the form. Thanks! --> #### Operating system and version: #### `nvm debug` output: <details> <!-- do not delete the following blank line --> ```sh nvm --version: v0.35.0 $SHELL: /bin/bash $SHLVL: 1 ${HOME}: /home/kolorafa ${NVM_DIR}: '${HOME}/.nvm' ${PATH}: ${NVM_DIR}/versions/node/v8.17.0/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/opt/android-sdk/tools:/opt/android-sdk/tools/bin:/opt/COMODO:/opt/cuda/bin:/var/lib/flatpak/exports/bin:/usr/lib/jvm/default/bin:/usr/lib32/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/var/lib/snapd/snap/bin:/opt/xtensa-lx106-elf-gcc/bin:${HOME}/.cargo/bin:${HOME}/.local/bin $PREFIX: '' ${NPM_CONFIG_PREFIX}: '' $NVM_NODEJS_ORG_MIRROR: '' $NVM_IOJS_ORG_MIRROR: '' shell version: 'GNU bash, wersja 5.0.17(1)-release (x86_64-pc-linux-gnu)' uname -a: 'Linux 5.7.2-arch1-1 #1 SMP PREEMPT Wed, 10 Jun 2020 20:36:24 +0000 x86_64 GNU/Linux' OS version: Antergos Linux () curl: curl jest /usr/bin/curl, curl 7.70.0 (x86_64-pc-linux-gnu) libcurl/7.70.0 OpenSSL/1.1.1g zlib/1.2.11 libidn2/2.3.0 libpsl/0.21.0 (+libidn2/2.2.0) libssh2/1.9.0 nghttp2/1.41.0 wget: wget jest /usr/bin/wget, GNU Wget 1.20.3 zbudowany na systemie linux-gnu. ls: nie ma dostępu do 'git': Nie ma takiego pliku ani katalogu git: git jest /usr/bin/git, git version 2.27.0 ls: nie ma dostępu do 'grep': Nie ma takiego pliku ani katalogu grep: grep jest aliasem do grep --colour=auto', grep (GNU grep) 3.4 ls: nie ma dostępu do 'awk': Nie ma takiego pliku ani katalogu awk: awk jest /usr/bin/awk, GNU Awk 5.1.0, API: 3.0 (GNU MPFR 4.0.2, GNU MP 6.2.0) ls: nie ma dostępu do 'sed': Nie ma takiego pliku ani katalogu sed: sed jest /usr/bin/sed, sed (GNU sed) 4.8 ls: nie ma dostępu do 'cut': Nie ma takiego pliku ani katalogu cut: cut jest /usr/bin/cut, cut (GNU coreutils) 8.32 ls: nie ma dostępu do 'basename': Nie ma takiego pliku ani katalogu basename: basename jest /usr/bin/basename, basename (GNU coreutils) 8.32 ls: nie ma dostępu do 'rm': Nie ma takiego pliku ani katalogu rm: rm jest /usr/bin/rm, rm (GNU coreutils) 8.32 ls: nie ma dostępu do 'mkdir': Nie ma takiego pliku ani katalogu mkdir: mkdir jest /usr/bin/mkdir, mkdir (GNU coreutils) 8.32 ls: nie ma dostępu do 'xargs': Nie ma takiego pliku ani katalogu xargs: xargs jest /usr/bin/xargs, xargs (GNU findutils) 4.7.0 nvm current: v8.17.0 which node: ${NVM_DIR}/versions/node/v8.17.0/bin/node which iojs: which: no iojs in (${NVM_DIR}/versions/node/v8.17.0/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/opt/android-sdk/tools:/opt/android-sdk/tools/bin:/opt/COMODO:/opt/cuda/bin:/var/lib/flatpak/exports/bin:/usr/lib/jvm/default/bin:/usr/lib32/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/var/lib/snapd/snap/bin:/opt/xtensa-lx106-elf-gcc/bin:${HOME}/.cargo/bin:${HOME}/.local/bin) which npm: ${NVM_DIR}/versions/node/v8.17.0/bin/npm npm config get prefix: ${NVM_DIR}/versions/node/v8.17.0 npm root -g: ${NVM_DIR}/versions/node/v8.17.0/lib/node_modules ``` </details> #### `nvm ls` output: <details> <!-- do not delete the following blank line --> ```sh -> v8.17.0 v12.16.1 system default -> 8 (-> v8.17.0) node -> stable (-> v12.16.1) (default) stable -> 12.16 (-> v12.16.1) (default) iojs -> N/A (default) unstable -> N/A (default) lts/* -> lts/erbium (-> v12.16.1) lts/argon -> v4.9.1 (-> N/A) lts/boron -> v6.17.1 (-> N/A) lts/carbon -> v8.17.0 lts/dubnium -> v10.19.0 (-> N/A) lts/erbium -> v12.16.1 ``` </details> #### How did you install `nvm`? Arch - nvm AUR #### What steps did you perform? Open new gnome terminal by shortcut #### What happened? Terminal opened but bash didn't show for 20s (when HD was slowed down by copy) While HD moderatly used - bash show in 1-2s ( https://youtu.be/Ie4Nnml55-g ) While HD was very busy and terminal load in 20s, i used different already open terminal and while doing ps i found that it's stuck at "npm" command: ``` kolorafa 3024410 0.0 0.0 15844 9960 pts/9 Ss 08:07 0:00 | \_ bash kolorafa 3026045 0.0 0.0 13008 4632 pts/9 R+ 08:09 0:00 | | \_ ps auxf kolorafa 3025222 0.0 0.0 14364 8504 pts/11 Ss+ 08:08 0:00 | \_ bash kolorafa 3025440 0.4 0.1 1036508 39904 pts/11 Dl+ 08:08 0:00 | | \_ npm kolorafa 3025697 0.1 0.0 14364 8348 pts/13 Ss+ 08:09 0:00 | \_ bash kolorafa 3025914 0.8 0.1 1036512 40236 pts/13 Dl+ 08:09 0:00 | \_ npm ``` #### What did you expect to happen? NVM loads without slowing down anything ;) #### Is there anything in any of your profile files that modifies the `PATH`? Don't think anything related
perf
init nvm sh slow mostly while executing npm operating system and version nvm debug output sh nvm version shell bin bash shlvl home home kolorafa nvm dir home nvm path nvm dir versions node bin usr local sbin usr local bin usr bin opt android sdk tools opt android sdk tools bin opt comodo opt cuda bin var lib flatpak exports bin usr lib jvm default bin usr jvm default bin usr bin site perl usr bin vendor perl usr bin core perl var lib snapd snap bin opt xtensa elf gcc bin home cargo bin home local bin prefix npm config prefix nvm nodejs org mirror nvm iojs org mirror shell version gnu bash wersja release pc linux gnu uname a linux smp preempt wed jun gnu linux os version antergos linux curl curl jest usr bin curl curl pc linux gnu libcurl openssl zlib libpsl wget wget jest usr bin wget gnu wget zbudowany na systemie linux gnu ls nie ma dostępu do git nie ma takiego pliku ani katalogu git git jest usr bin git git version ls nie ma dostępu do grep nie ma takiego pliku ani katalogu grep grep jest aliasem do grep colour auto grep gnu grep ls nie ma dostępu do awk nie ma takiego pliku ani katalogu awk awk jest usr bin awk gnu awk api gnu mpfr gnu mp ls nie ma dostępu do sed nie ma takiego pliku ani katalogu sed sed jest usr bin sed sed gnu sed ls nie ma dostępu do cut nie ma takiego pliku ani katalogu cut cut jest usr bin cut cut gnu coreutils ls nie ma dostępu do basename nie ma takiego pliku ani katalogu basename basename jest usr bin basename basename gnu coreutils ls nie ma dostępu do rm nie ma takiego pliku ani katalogu rm rm jest usr bin rm rm gnu coreutils ls nie ma dostępu do mkdir nie ma takiego pliku ani katalogu mkdir mkdir jest usr bin mkdir mkdir gnu coreutils ls nie ma dostępu do xargs nie ma takiego pliku ani katalogu xargs xargs jest usr bin xargs xargs gnu findutils nvm current which node nvm dir versions node bin node which iojs which no iojs in nvm dir versions node bin usr local sbin usr local bin usr bin opt android sdk tools opt android sdk tools bin opt comodo opt cuda bin var lib flatpak exports bin usr lib jvm default bin usr jvm default bin usr bin site perl usr bin vendor perl usr bin core perl var lib snapd snap bin opt xtensa elf gcc bin home cargo bin home local bin which npm nvm dir versions node bin npm npm config get prefix nvm dir versions node npm root g nvm dir versions node lib node modules nvm ls output sh system default node stable default stable default iojs n a default unstable n a default lts lts erbium lts argon n a lts boron n a lts carbon lts dubnium n a lts erbium how did you install nvm arch nvm aur what steps did you perform open new gnome terminal by shortcut what happened terminal opened but bash didn t show for when hd was slowed down by copy while hd moderatly used bash show in while hd was very busy and terminal load in i used different already open terminal and while doing ps i found that it s stuck at npm command kolorafa pts ss bash kolorafa pts r ps auxf kolorafa pts ss bash kolorafa pts dl npm kolorafa pts ss bash kolorafa pts dl npm what did you expect to happen nvm loads without slowing down anything is there anything in any of your profile files that modifies the path don t think anything related
1
81,369
23,449,062,922
IssuesEvent
2022-08-15 23:21:53
trilinos/Trilinos
https://api.github.com/repos/trilinos/Trilinos
closed
Many packages erroneously using abs dir CMAKE_PREFIX_PATH for argument to install()
type: bug pkg: Kokkos pkg: STK pkg: Teuchos pkg: Intrepid2 pkg: ROL pkg: ShyLU impacting: configure or build pkg: KokkosKernels pkg: Krino
@tasmith4 ## Bug Report @trilinos/kokkos, @trilinos/kokkos-kernels, @trilinos/teuchos ### Description While setting up automated installation testing for Trilinos needed to build and run tests for simpleBuildAgainstTrilinos, I ran into a problem with Kokkos and KokkosKernels install() commands using abs dir for the DESTINATION. For example, Kokkos/CMakeLists.txt has: ``` IF (KOKKOS_HAS_TRILINOS) SET(TRILINOS_INCDIR ${CMAKE_INSTALL_PREFIX}/${${PROJECT_NAME}_INSTALL_INCLUDE_DIR}) ... ``` Fixing this is easy. Just make it: ``` IF (KOKKOS_HAS_TRILINOS) SET(TRILINOS_INCDIR ${${PROJECT_NAME}_INSTALL_INCLUDE_DIR}) ... ``` As explained in the [CMake install() documentation](https://cmake.org/cmake/help/v3.17/command/install.html): > If a relative path is given it is interpreted relative to the value of the [CMAKE_INSTALL_PREFIX](https://cmake.org/cmake/help/v3.17/variable/CMAKE_INSTALL_PREFIX.html#variable:CMAKE_INSTALL_PREFIX) variable. That allows using: ``` $ cmake --install . --prefix <some-other-path> ``` ### Steps to Reproduce 1. SHA1: 042bdd7648a 1. Configure script: Any configure script 1. Configure log: TBD 1. Build log: TBD 1. Input deck: N.A. 1. Configure the project with Kokkos, KokkosKernels, and Teuchos enabled **without setting CMAKE_INSTALL_PREFIX** and and then run `cmake --install . --prefix ${PWD}/install`.
1.0
Many packages erroneously using abs dir CMAKE_PREFIX_PATH for argument to install() - @tasmith4 ## Bug Report @trilinos/kokkos, @trilinos/kokkos-kernels, @trilinos/teuchos ### Description While setting up automated installation testing for Trilinos needed to build and run tests for simpleBuildAgainstTrilinos, I ran into a problem with Kokkos and KokkosKernels install() commands using abs dir for the DESTINATION. For example, Kokkos/CMakeLists.txt has: ``` IF (KOKKOS_HAS_TRILINOS) SET(TRILINOS_INCDIR ${CMAKE_INSTALL_PREFIX}/${${PROJECT_NAME}_INSTALL_INCLUDE_DIR}) ... ``` Fixing this is easy. Just make it: ``` IF (KOKKOS_HAS_TRILINOS) SET(TRILINOS_INCDIR ${${PROJECT_NAME}_INSTALL_INCLUDE_DIR}) ... ``` As explained in the [CMake install() documentation](https://cmake.org/cmake/help/v3.17/command/install.html): > If a relative path is given it is interpreted relative to the value of the [CMAKE_INSTALL_PREFIX](https://cmake.org/cmake/help/v3.17/variable/CMAKE_INSTALL_PREFIX.html#variable:CMAKE_INSTALL_PREFIX) variable. That allows using: ``` $ cmake --install . --prefix <some-other-path> ``` ### Steps to Reproduce 1. SHA1: 042bdd7648a 1. Configure script: Any configure script 1. Configure log: TBD 1. Build log: TBD 1. Input deck: N.A. 1. Configure the project with Kokkos, KokkosKernels, and Teuchos enabled **without setting CMAKE_INSTALL_PREFIX** and and then run `cmake --install . --prefix ${PWD}/install`.
non_perf
many packages erroneously using abs dir cmake prefix path for argument to install bug report trilinos kokkos trilinos kokkos kernels trilinos teuchos description while setting up automated installation testing for trilinos needed to build and run tests for simplebuildagainsttrilinos i ran into a problem with kokkos and kokkoskernels install commands using abs dir for the destination for example kokkos cmakelists txt has if kokkos has trilinos set trilinos incdir cmake install prefix project name install include dir fixing this is easy just make it if kokkos has trilinos set trilinos incdir project name install include dir as explained in the if a relative path is given it is interpreted relative to the value of the variable that allows using cmake install prefix steps to reproduce configure script any configure script configure log tbd build log tbd input deck n a configure the project with kokkos kokkoskernels and teuchos enabled without setting cmake install prefix and and then run cmake install prefix pwd install
0
89,670
18,019,568,097
IssuesEvent
2021-09-16 17:36:22
WordPress/openverse-frontend
https://api.github.com/repos/WordPress/openverse-frontend
closed
[Bug] Managing playback of multiple media files
🟧 priority: high 🛠 goal: fix 💻 aspect: code
## Description <!-- Concisely describe the bug. --> The current setup allows for multiple audio files to be played concurrently, which is a bad user experience. ## Reproduction <!-- Provide detailed steps to reproduce the bug. --> 1. View any page with multiple audio players 2. Press play on multiple audio players 3. Listen to the resulting 'chaos orchestra' ## Expectation <!-- Concisely describe what you expected to happen. --> When pressing 'play' on an audio file, if there is _already_ an active audio file it should be paused. ## Screenshots <!-- Add screenshots to show the problem; or delete the section entirely. --> ## Resolution <!-- Replace the [ ] with [x] to check the box. --> I have proposed a solution in #183. - [ ] 🙋 I would be interested in resolving this bug.
1.0
[Bug] Managing playback of multiple media files - ## Description <!-- Concisely describe the bug. --> The current setup allows for multiple audio files to be played concurrently, which is a bad user experience. ## Reproduction <!-- Provide detailed steps to reproduce the bug. --> 1. View any page with multiple audio players 2. Press play on multiple audio players 3. Listen to the resulting 'chaos orchestra' ## Expectation <!-- Concisely describe what you expected to happen. --> When pressing 'play' on an audio file, if there is _already_ an active audio file it should be paused. ## Screenshots <!-- Add screenshots to show the problem; or delete the section entirely. --> ## Resolution <!-- Replace the [ ] with [x] to check the box. --> I have proposed a solution in #183. - [ ] 🙋 I would be interested in resolving this bug.
non_perf
managing playback of multiple media files description the current setup allows for multiple audio files to be played concurrently which is a bad user experience reproduction view any page with multiple audio players press play on multiple audio players listen to the resulting chaos orchestra expectation when pressing play on an audio file if there is already an active audio file it should be paused screenshots resolution i have proposed a solution in 🙋 i would be interested in resolving this bug
0
28,649
13,771,817,941
IssuesEvent
2020-10-07 22:52:01
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
Option to limit tables/fields that are synced to specific schemas
Administration/Data Model Administration/Metadata & Sync Type:New Feature Type:Performance
We're working on a relatively small system in redshift that uses a shared redshift cluster. There are lots of schemas most of which are not ours (and we don't have permission to read) but our local postgres is becoming unnecessarily heavy due to the automatic caching of fields in all schemas into metabase_field (it would likely be a few hundred rows, but is getting towards a hundred thousand). A possible enhancement would be to have the option to limit field caching to schemas that have been viewed rather than crawling all schemas up front.
True
Option to limit tables/fields that are synced to specific schemas - We're working on a relatively small system in redshift that uses a shared redshift cluster. There are lots of schemas most of which are not ours (and we don't have permission to read) but our local postgres is becoming unnecessarily heavy due to the automatic caching of fields in all schemas into metabase_field (it would likely be a few hundred rows, but is getting towards a hundred thousand). A possible enhancement would be to have the option to limit field caching to schemas that have been viewed rather than crawling all schemas up front.
perf
option to limit tables fields that are synced to specific schemas we re working on a relatively small system in redshift that uses a shared redshift cluster there are lots of schemas most of which are not ours and we don t have permission to read but our local postgres is becoming unnecessarily heavy due to the automatic caching of fields in all schemas into metabase field it would likely be a few hundred rows but is getting towards a hundred thousand a possible enhancement would be to have the option to limit field caching to schemas that have been viewed rather than crawling all schemas up front
1
117,388
25,106,327,580
IssuesEvent
2022-11-08 16:53:29
serbanghita/Mobile-Detect
https://api.github.com/repos/serbanghita/Mobile-Detect
closed
Feature request: getDevice and getDeviceCategory
area: Code Quality deprecated
Great and usefull work! Is there already a way to get a string which device was detected? If not, it would be great to get 2 methods: function getDevice() -> returns the device which was detected (like "iPhone", "iPad", "Samsung", "SamsungTablet" etc). And (just for convenience): function getDeviceCategory() -> returns "mobile", "tablet" or "desktop" as a string
1.0
Feature request: getDevice and getDeviceCategory - Great and usefull work! Is there already a way to get a string which device was detected? If not, it would be great to get 2 methods: function getDevice() -> returns the device which was detected (like "iPhone", "iPad", "Samsung", "SamsungTablet" etc). And (just for convenience): function getDeviceCategory() -> returns "mobile", "tablet" or "desktop" as a string
non_perf
feature request getdevice and getdevicecategory great and usefull work is there already a way to get a string which device was detected if not it would be great to get methods function getdevice returns the device which was detected like iphone ipad samsung samsungtablet etc and just for convenience function getdevicecategory returns mobile tablet or desktop as a string
0
30,317
14,517,239,049
IssuesEvent
2020-12-13 18:50:44
johnboiles/obs-mac-virtualcam
https://api.github.com/repos/johnboiles/obs-mac-virtualcam
opened
Use `OBSDALCMSampleBufferCreateFromDataNoCopy` to eliminate a framebuffer memory copy
enhancement performance
We could potentially improve performance by using [`OBSDALCMSampleBufferCreateFromDataNoCopy`](https://github.com/johnboiles/obs-mac-virtualcam/blob/d6b5db2f07d92e354e5a1f8f8783360eaca1c8bc/src/dal-plugin/OBSDALCMSampleBufferUtils.mm#L102) instead of `OBSDALCMSampleBufferCreateFromData` which should remove a memory copy of the framebuffer, saving a bit of performance and latency transferring the frames from OBS to the virtual camera. When I tried this, it seemed to work just fine when using the OBS Virtual Camera device in other programs. But when using the plugin as a source in OBS (looping back the output of OBS), it didn't work. Strangely when I'd set the source to a lower resolution than the output it would work for some reason. This made me worried that using `OBSDALCMSampleBufferCreateFromDataNoCopy` could cause problems in some programs. I'm really not sure why this is, but it's possible this is a bug in OBS, and in fact this plugin would be just fine eliminating this memory copy. More investigation is needed.
True
Use `OBSDALCMSampleBufferCreateFromDataNoCopy` to eliminate a framebuffer memory copy - We could potentially improve performance by using [`OBSDALCMSampleBufferCreateFromDataNoCopy`](https://github.com/johnboiles/obs-mac-virtualcam/blob/d6b5db2f07d92e354e5a1f8f8783360eaca1c8bc/src/dal-plugin/OBSDALCMSampleBufferUtils.mm#L102) instead of `OBSDALCMSampleBufferCreateFromData` which should remove a memory copy of the framebuffer, saving a bit of performance and latency transferring the frames from OBS to the virtual camera. When I tried this, it seemed to work just fine when using the OBS Virtual Camera device in other programs. But when using the plugin as a source in OBS (looping back the output of OBS), it didn't work. Strangely when I'd set the source to a lower resolution than the output it would work for some reason. This made me worried that using `OBSDALCMSampleBufferCreateFromDataNoCopy` could cause problems in some programs. I'm really not sure why this is, but it's possible this is a bug in OBS, and in fact this plugin would be just fine eliminating this memory copy. More investigation is needed.
perf
use obsdalcmsamplebuffercreatefromdatanocopy to eliminate a framebuffer memory copy we could potentially improve performance by using instead of obsdalcmsamplebuffercreatefromdata which should remove a memory copy of the framebuffer saving a bit of performance and latency transferring the frames from obs to the virtual camera when i tried this it seemed to work just fine when using the obs virtual camera device in other programs but when using the plugin as a source in obs looping back the output of obs it didn t work strangely when i d set the source to a lower resolution than the output it would work for some reason this made me worried that using obsdalcmsamplebuffercreatefromdatanocopy could cause problems in some programs i m really not sure why this is but it s possible this is a bug in obs and in fact this plugin would be just fine eliminating this memory copy more investigation is needed
1
11,625
7,625,401,191
IssuesEvent
2018-05-03 21:19:07
Microsoft/BotBuilder
https://api.github.com/repos/Microsoft/BotBuilder
closed
Bot memory leak in .Net SDK
.NET SDK bug investigate performance
## Bot Info * SDK Platform: .NET * SDK Version: 3.11.0 ## Issue Description Everytime u send a msg to the bot the memory usage stacks up but never gets down. even when the client ends the session with the bot In the picture u can see the diagnostic session from visual studio the events around 1:10min are msgs the bot received from the client ![botframework](https://user-images.githubusercontent.com/18367963/33077168-56e54376-cecf-11e7-8c9d-0bf54b7c1d6d.png) ## Code Example look at Step 1 from reproduction steps ## Reproduction Steps 1. use the visual studio template https://docs.microsoft.com/en-us/bot-framework/dotnet/bot-builder-dotnet-quickstart 2. update to the newest BotBuilder SDK 3. run the bot and send some msgs from the botframework-emulator ## Expected Behavior No Memory Leak ## Actual Results Memory Leak ### Some additional information im pretty new to the bot framework so i dont know if this a feature and memory gets cleaned after some hours or is this a real problem
True
Bot memory leak in .Net SDK - ## Bot Info * SDK Platform: .NET * SDK Version: 3.11.0 ## Issue Description Everytime u send a msg to the bot the memory usage stacks up but never gets down. even when the client ends the session with the bot In the picture u can see the diagnostic session from visual studio the events around 1:10min are msgs the bot received from the client ![botframework](https://user-images.githubusercontent.com/18367963/33077168-56e54376-cecf-11e7-8c9d-0bf54b7c1d6d.png) ## Code Example look at Step 1 from reproduction steps ## Reproduction Steps 1. use the visual studio template https://docs.microsoft.com/en-us/bot-framework/dotnet/bot-builder-dotnet-quickstart 2. update to the newest BotBuilder SDK 3. run the bot and send some msgs from the botframework-emulator ## Expected Behavior No Memory Leak ## Actual Results Memory Leak ### Some additional information im pretty new to the bot framework so i dont know if this a feature and memory gets cleaned after some hours or is this a real problem
perf
bot memory leak in net sdk bot info sdk platform net sdk version issue description everytime u send a msg to the bot the memory usage stacks up but never gets down even when the client ends the session with the bot in the picture u can see the diagnostic session from visual studio the events around are msgs the bot received from the client code example look at step from reproduction steps reproduction steps use the visual studio template update to the newest botbuilder sdk run the bot and send some msgs from the botframework emulator expected behavior no memory leak actual results memory leak some additional information im pretty new to the bot framework so i dont know if this a feature and memory gets cleaned after some hours or is this a real problem
1
28,103
13,531,456,961
IssuesEvent
2020-09-15 21:41:13
osate/osate2
https://api.github.com/repos/osate/osate2
opened
Populating AADL property values view is slow
category:performance core
<!-- If you want to ask a question or if you are not sure if there really is a bug in OSATE, post on the google group first, please. https://groups.google.com/forum/#!forum/osate --> <!-- Use regular sentence capitalization in issue title and use the preview tab to check the bug report before submitting it, please. --> ## Summary <!--- Briefly describe the problem, and what you're trying to accomplish. Screenshots or other files should be attached directly to this issue. Don't attach binary files, such as Word documents, please. --> ## Expected and Current Behavior <!--- What should be happening, but isn't? What is happening instead? --> ## Steps to Reproduce <!--- If you can provide a small model or test case that demonstrates the issue, it will be much easier to debug. Paste small code snippets in the three ```backticks``` below, larger ones should be put in a [gist](gist.github.com) --> 1. 2. 3. ```aadl Paste your model here ``` ## Environment * **OSATE Version**: * **Operating System**: <!-- Windows / Mac / Linux and version number -->
True
Populating AADL property values view is slow - <!-- If you want to ask a question or if you are not sure if there really is a bug in OSATE, post on the google group first, please. https://groups.google.com/forum/#!forum/osate --> <!-- Use regular sentence capitalization in issue title and use the preview tab to check the bug report before submitting it, please. --> ## Summary <!--- Briefly describe the problem, and what you're trying to accomplish. Screenshots or other files should be attached directly to this issue. Don't attach binary files, such as Word documents, please. --> ## Expected and Current Behavior <!--- What should be happening, but isn't? What is happening instead? --> ## Steps to Reproduce <!--- If you can provide a small model or test case that demonstrates the issue, it will be much easier to debug. Paste small code snippets in the three ```backticks``` below, larger ones should be put in a [gist](gist.github.com) --> 1. 2. 3. ```aadl Paste your model here ``` ## Environment * **OSATE Version**: * **Operating System**: <!-- Windows / Mac / Linux and version number -->
perf
populating aadl property values view is slow if you want to ask a question or if you are not sure if there really is a bug in osate post on the google group first please use regular sentence capitalization in issue title and use the preview tab to check the bug report before submitting it please summary briefly describe the problem and what you re trying to accomplish screenshots or other files should be attached directly to this issue don t attach binary files such as word documents please expected and current behavior what should be happening but isn t what is happening instead steps to reproduce if you can provide a small model or test case that demonstrates the issue it will be much easier to debug paste small code snippets in the three backticks below larger ones should be put in a gist github com aadl paste your model here environment osate version operating system
1
16,662
9,481,659,817
IssuesEvent
2019-04-21 07:35:44
Mararsh/MyBox
https://api.github.com/repos/Mararsh/MyBox
closed
Monitor stages openning and closing
New Feature/Function Performance
No general method about stages opened: "StageHelper.getStages()" in Java 8 while " Window.getWindows()" after Java 8. So have to manage the opened stages by MyBox itself.
True
Monitor stages openning and closing - No general method about stages opened: "StageHelper.getStages()" in Java 8 while " Window.getWindows()" after Java 8. So have to manage the opened stages by MyBox itself.
perf
monitor stages openning and closing no general method about stages opened stagehelper getstages in java while window getwindows after java so have to manage the opened stages by mybox itself
1
36,563
15,026,149,549
IssuesEvent
2021-02-01 22:11:57
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
RenewLock API missing?
Pri2 assigned-to-author product-question service-bus-messaging/svc triaged
I couldn't find any API in the ruby sdk to renew or extend the lock for a given message. In C# it is [this](https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.servicebus.core.messagereceiver.renewlockasync?view=azure-dotnet). Also, is there any support for a [AutoRenewTimeout property](https://docs.microsoft.com/en-us/dotnet/api/microsoft.servicebus.messaging.onmessageoptions.autorenewtimeout?view=azure-dotnet) like we have in C# sdk? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: e5d9aaf6-1dd6-bb09-bb52-ddd1bc9a2719 * Version Independent ID: 2ad3512c-f825-b34c-e1c6-d982e90a534d * Content: [How to use Azure Service Bus queues with Ruby](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-ruby-how-to-use-queues#feedback) * Content Source: [articles/service-bus-messaging/service-bus-ruby-how-to-use-queues.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-bus-messaging/service-bus-ruby-how-to-use-queues.md) * Service: **service-bus-messaging** * GitHub Login: @axisc * Microsoft Alias: **aschhab**
1.0
RenewLock API missing? - I couldn't find any API in the ruby sdk to renew or extend the lock for a given message. In C# it is [this](https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.servicebus.core.messagereceiver.renewlockasync?view=azure-dotnet). Also, is there any support for a [AutoRenewTimeout property](https://docs.microsoft.com/en-us/dotnet/api/microsoft.servicebus.messaging.onmessageoptions.autorenewtimeout?view=azure-dotnet) like we have in C# sdk? --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: e5d9aaf6-1dd6-bb09-bb52-ddd1bc9a2719 * Version Independent ID: 2ad3512c-f825-b34c-e1c6-d982e90a534d * Content: [How to use Azure Service Bus queues with Ruby](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-ruby-how-to-use-queues#feedback) * Content Source: [articles/service-bus-messaging/service-bus-ruby-how-to-use-queues.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-bus-messaging/service-bus-ruby-how-to-use-queues.md) * Service: **service-bus-messaging** * GitHub Login: @axisc * Microsoft Alias: **aschhab**
non_perf
renewlock api missing i couldn t find any api in the ruby sdk to renew or extend the lock for a given message in c it is also is there any support for a like we have in c sdk document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service service bus messaging github login axisc microsoft alias aschhab
0
389,586
11,504,254,038
IssuesEvent
2020-02-12 22:51:05
google/ground-android
https://api.github.com/repos/google/ground-android
closed
[Code health] Replace Firestore toObject() with runtime type checking
priority: p2 type: cleanup
`DocumentSnapshot#toObject()` auto fills POJOs, but has some shortcomings: * Fields in the db with a different type than the POJO fields result in `toObject()` failing with and error. * We need Yet Another Model Class for the intermediate representation (e.g., `ProjectDoc`). * We still need field names in strings constants to do partial document updates and queries.
1.0
[Code health] Replace Firestore toObject() with runtime type checking - `DocumentSnapshot#toObject()` auto fills POJOs, but has some shortcomings: * Fields in the db with a different type than the POJO fields result in `toObject()` failing with and error. * We need Yet Another Model Class for the intermediate representation (e.g., `ProjectDoc`). * We still need field names in strings constants to do partial document updates and queries.
non_perf
replace firestore toobject with runtime type checking documentsnapshot toobject auto fills pojos but has some shortcomings fields in the db with a different type than the pojo fields result in toobject failing with and error we need yet another model class for the intermediate representation e g projectdoc we still need field names in strings constants to do partial document updates and queries
0
8,413
6,540,586,283
IssuesEvent
2017-09-01 16:00:17
scalameta/scalameta
https://api.github.com/repos/scalameta/scalameta
closed
Benchmark List vs Seq in Database
Performance
We changed Database.entries from List to Seq based on a hypothesis that Seq is going to be significantly more efficient (https://github.com/scalameta/scalameta/issues/996). We owe it to ourselves to confirm this hypothesis.
True
Benchmark List vs Seq in Database - We changed Database.entries from List to Seq based on a hypothesis that Seq is going to be significantly more efficient (https://github.com/scalameta/scalameta/issues/996). We owe it to ourselves to confirm this hypothesis.
perf
benchmark list vs seq in database we changed database entries from list to seq based on a hypothesis that seq is going to be significantly more efficient we owe it to ourselves to confirm this hypothesis
1
47,553
25,065,296,348
IssuesEvent
2022-11-07 07:44:07
TryQuiet/quiet
https://api.github.com/repos/TryQuiet/quiet
closed
Lag while typing messages on Android 13
bug performance
Steps to reproduce: 1. Join our existing community using Android 13 2. Send a few messages and wait a while Expected: characters appear immediately as you type. Actual: characters appear after a huge delay as you type. This is bad enough as to make the app unusable for me, on the latest hardware. Pixel 6. Sending a message also seems to make this bug occur, so we should check this case too. Steps: 1. Send a message 2. Try sending another message. Expected: you can type as normal. Actual: framerate lags.
True
Lag while typing messages on Android 13 - Steps to reproduce: 1. Join our existing community using Android 13 2. Send a few messages and wait a while Expected: characters appear immediately as you type. Actual: characters appear after a huge delay as you type. This is bad enough as to make the app unusable for me, on the latest hardware. Pixel 6. Sending a message also seems to make this bug occur, so we should check this case too. Steps: 1. Send a message 2. Try sending another message. Expected: you can type as normal. Actual: framerate lags.
perf
lag while typing messages on android steps to reproduce join our existing community using android send a few messages and wait a while expected characters appear immediately as you type actual characters appear after a huge delay as you type this is bad enough as to make the app unusable for me on the latest hardware pixel sending a message also seems to make this bug occur so we should check this case too steps send a message try sending another message expected you can type as normal actual framerate lags
1
16,092
20,261,157,186
IssuesEvent
2022-02-15 07:35:49
quark-engine/quark-engine
https://api.github.com/repos/quark-engine/quark-engine
closed
Prepare to release version v22.2.1
work-in-progress issue-processing-state-04
Update the version number in `__init__.py` for the latest release of Quark. In this release, the following changes will be included. - #301 - #304 - #300 - #303
1.0
Prepare to release version v22.2.1 - Update the version number in `__init__.py` for the latest release of Quark. In this release, the following changes will be included. - #301 - #304 - #300 - #303
non_perf
prepare to release version update the version number in init py for the latest release of quark in this release the following changes will be included
0
81,497
10,144,236,866
IssuesEvent
2019-08-04 19:05:07
JuliaRobotics/IncrementalInference.jl
https://api.github.com/repos/JuliaRobotics/IncrementalInference.jl
opened
Transition to LightGraphsDFG as default
design dfg upstream
what is the best way to introduce LightGraphsDFG and Arena. Guess the major decision is to move away from Graphs.jl to MetaGraphs.jl. Graphs.jl is on JuliaArchive, so long term prospects won't go too far there. Since Graphs.jl is working and has legacy I would not want just drop Graphs.jl outright. Can we make some plan to move LightGraphs.jl primary and drop Graphs.jl to secondary. So my suggestion is as follows: - IIF v0.8.x is clean new API on GraphsDFG and we start making sure LightGraphsDFG is ready to take the load, - IIF v0.9.x we switch to LightGraphsDFG as primary and demote GraphsDFG to secondary but keep around for several cycles, at least into IIF v1.0.x as secondary. If it all goes well, IIF v1.1.x can probably deprecate GraphsDFG. There will still be something to do about BayesTree which currently still depends on Graphs.jl and ExVertex.
1.0
Transition to LightGraphsDFG as default - what is the best way to introduce LightGraphsDFG and Arena. Guess the major decision is to move away from Graphs.jl to MetaGraphs.jl. Graphs.jl is on JuliaArchive, so long term prospects won't go too far there. Since Graphs.jl is working and has legacy I would not want just drop Graphs.jl outright. Can we make some plan to move LightGraphs.jl primary and drop Graphs.jl to secondary. So my suggestion is as follows: - IIF v0.8.x is clean new API on GraphsDFG and we start making sure LightGraphsDFG is ready to take the load, - IIF v0.9.x we switch to LightGraphsDFG as primary and demote GraphsDFG to secondary but keep around for several cycles, at least into IIF v1.0.x as secondary. If it all goes well, IIF v1.1.x can probably deprecate GraphsDFG. There will still be something to do about BayesTree which currently still depends on Graphs.jl and ExVertex.
non_perf
transition to lightgraphsdfg as default what is the best way to introduce lightgraphsdfg and arena guess the major decision is to move away from graphs jl to metagraphs jl graphs jl is on juliaarchive so long term prospects won t go too far there since graphs jl is working and has legacy i would not want just drop graphs jl outright can we make some plan to move lightgraphs jl primary and drop graphs jl to secondary so my suggestion is as follows iif x is clean new api on graphsdfg and we start making sure lightgraphsdfg is ready to take the load iif x we switch to lightgraphsdfg as primary and demote graphsdfg to secondary but keep around for several cycles at least into iif x as secondary if it all goes well iif x can probably deprecate graphsdfg there will still be something to do about bayestree which currently still depends on graphs jl and exvertex
0
363,635
25,459,588,312
IssuesEvent
2022-11-24 17:11:53
Juguetear/juguetear-web
https://api.github.com/repos/Juguetear/juguetear-web
closed
Armar un diagrama de la BBDD
documentation :books:
Habria que armar un diagrama de como se va a relacionar toda la información en la BBDD en base a los diseños. Diseños: https://www.figma.com/file/FS9WsAYrmkESsmUAGzYdNo/Juguetear?node-id=0%3A1 Diagrama: https://lucid.app/lucidchart/ca407ce2-9ad8-4fd9-bce4-35d46ed0cfc0/edit?view_items=yRKqh-dGCGV0&invitationId=inv_897a2d64-31c9-4fa1-b33c-ffda420497ab Documentación: https://www.sanity.io/docs/block-type
1.0
Armar un diagrama de la BBDD - Habria que armar un diagrama de como se va a relacionar toda la información en la BBDD en base a los diseños. Diseños: https://www.figma.com/file/FS9WsAYrmkESsmUAGzYdNo/Juguetear?node-id=0%3A1 Diagrama: https://lucid.app/lucidchart/ca407ce2-9ad8-4fd9-bce4-35d46ed0cfc0/edit?view_items=yRKqh-dGCGV0&invitationId=inv_897a2d64-31c9-4fa1-b33c-ffda420497ab Documentación: https://www.sanity.io/docs/block-type
non_perf
armar un diagrama de la bbdd habria que armar un diagrama de como se va a relacionar toda la información en la bbdd en base a los diseños diseños diagrama documentación
0
52,637
27,700,160,623
IssuesEvent
2023-03-14 07:17:14
rerun-io/rerun
https://api.github.com/repos/rerun-io/rerun
closed
The time panel is slow when viewing a lot of data points
📉 performance
Repro: run `just py-build && examples/clock/main.py --steps 250000` A lot of time is spent by the time panel to view all the data points. The problem is that all the logged times are stored in a `BTreeSet<TimeInt>` which is iterated through each frame (because who knows, it might have changed since last frame). One solution is to replace that `BTreeSet` with a custom hierarchial data structure where we can stop recursion when we reach some limit informed by the current zoom level, e.g. stop recursing once we reach a node that span a time that takes up less than one pixel in the time panel view. There we just summarize the children with their time span and count. The data structure would probably be something similar to a B-tree, but we can make a lot of simplifications based on the content being POD, and that the only mutating operations is insert and trim (remove everything before time T, for memory pruning). The primary query we need to support is to iterate through all the datapoints, but taking strides that have some fixed width (e.g. 5s). **EDIT**: I have a nice idea for how to do this now, without too much work. Will perhaps work on it on the plane next week.
True
The time panel is slow when viewing a lot of data points - Repro: run `just py-build && examples/clock/main.py --steps 250000` A lot of time is spent by the time panel to view all the data points. The problem is that all the logged times are stored in a `BTreeSet<TimeInt>` which is iterated through each frame (because who knows, it might have changed since last frame). One solution is to replace that `BTreeSet` with a custom hierarchial data structure where we can stop recursion when we reach some limit informed by the current zoom level, e.g. stop recursing once we reach a node that span a time that takes up less than one pixel in the time panel view. There we just summarize the children with their time span and count. The data structure would probably be something similar to a B-tree, but we can make a lot of simplifications based on the content being POD, and that the only mutating operations is insert and trim (remove everything before time T, for memory pruning). The primary query we need to support is to iterate through all the datapoints, but taking strides that have some fixed width (e.g. 5s). **EDIT**: I have a nice idea for how to do this now, without too much work. Will perhaps work on it on the plane next week.
perf
the time panel is slow when viewing a lot of data points repro run just py build examples clock main py steps a lot of time is spent by the time panel to view all the data points the problem is that all the logged times are stored in a btreeset which is iterated through each frame because who knows it might have changed since last frame one solution is to replace that btreeset with a custom hierarchial data structure where we can stop recursion when we reach some limit informed by the current zoom level e g stop recursing once we reach a node that span a time that takes up less than one pixel in the time panel view there we just summarize the children with their time span and count the data structure would probably be something similar to a b tree but we can make a lot of simplifications based on the content being pod and that the only mutating operations is insert and trim remove everything before time t for memory pruning the primary query we need to support is to iterate through all the datapoints but taking strides that have some fixed width e g edit i have a nice idea for how to do this now without too much work will perhaps work on it on the plane next week
1
59,067
17,015,410,758
IssuesEvent
2021-07-02 11:17:01
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
opened
Right-to-left writing direction letters are not concatenated
Component: potlatch2 Priority: major Type: defect
**[Submitted to the original trac issue database at 10.36am, Wednesday, 9th March 2011]** How to reproduce: Enter a name on a node or way feature using a language with right-to-left writing direction (arabic, hebrew, farsi,...) like "". Result: Rendering shows each letter seperately, like "" (exagerrated dispay in here) Expected result: right-to-left letters should concatenate.
1.0
Right-to-left writing direction letters are not concatenated - **[Submitted to the original trac issue database at 10.36am, Wednesday, 9th March 2011]** How to reproduce: Enter a name on a node or way feature using a language with right-to-left writing direction (arabic, hebrew, farsi,...) like "". Result: Rendering shows each letter seperately, like "" (exagerrated dispay in here) Expected result: right-to-left letters should concatenate.
non_perf
right to left writing direction letters are not concatenated how to reproduce enter a name on a node or way feature using a language with right to left writing direction arabic hebrew farsi like result rendering shows each letter seperately like exagerrated dispay in here expected result right to left letters should concatenate
0
471,745
13,609,498,815
IssuesEvent
2020-09-23 05:25:46
WordPress/gutenberg
https://api.github.com/repos/WordPress/gutenberg
closed
Impossible to interact with block toolbars on Firefox
Browser Issues [Priority] High [Type] Bug [Type] Regression
On any block and any theme, just trying to hover/edit a block it's impossible to interract with the block's toolbar. As soon as I try, it disappears. Investigated using `git bisect`, the issue is a result of https://github.com/WordPress/gutenberg/pull/23034 **To reproduce** Steps to reproduce the behavior: 1. Add a paragraph, header or any other block 2. Try to hover its toolbar **Screenshots** ![Peek 2020-09-22 18-46](https://user-images.githubusercontent.com/588688/93906011-52426180-fd04-11ea-82f6-0384ccec60ca.gif) **Editor version:** - WordPress version: **5.6-alpha-49031** - Gutenberg version: master branch, currently on https://github.com/WordPress/gutenberg/tree/19be5f0bc01e7b972f9d6f15f19e17649bd22e1c **Desktop:** - OS: Pop!_OS (for all intents and purposes Ubuntu) - Browser Firefox 80.0.1
1.0
Impossible to interact with block toolbars on Firefox - On any block and any theme, just trying to hover/edit a block it's impossible to interract with the block's toolbar. As soon as I try, it disappears. Investigated using `git bisect`, the issue is a result of https://github.com/WordPress/gutenberg/pull/23034 **To reproduce** Steps to reproduce the behavior: 1. Add a paragraph, header or any other block 2. Try to hover its toolbar **Screenshots** ![Peek 2020-09-22 18-46](https://user-images.githubusercontent.com/588688/93906011-52426180-fd04-11ea-82f6-0384ccec60ca.gif) **Editor version:** - WordPress version: **5.6-alpha-49031** - Gutenberg version: master branch, currently on https://github.com/WordPress/gutenberg/tree/19be5f0bc01e7b972f9d6f15f19e17649bd22e1c **Desktop:** - OS: Pop!_OS (for all intents and purposes Ubuntu) - Browser Firefox 80.0.1
non_perf
impossible to interact with block toolbars on firefox on any block and any theme just trying to hover edit a block it s impossible to interract with the block s toolbar as soon as i try it disappears investigated using git bisect the issue is a result of to reproduce steps to reproduce the behavior add a paragraph header or any other block try to hover its toolbar screenshots editor version wordpress version alpha gutenberg version master branch currently on desktop os pop os for all intents and purposes ubuntu browser firefox
0
46,573
24,609,727,817
IssuesEvent
2022-10-14 19:59:49
mozilla-mobile/android-components
https://api.github.com/repos/mozilla-mobile/android-components
closed
Network `Response.Body.string()` would optimally allocate `Content-Length` header rather than 16 KiB default
performance perf:P3 perf:resource-use
_follow-up from https://github.com/mozilla-mobile/android-components/issues/11015 which is a child of https://github.com/mozilla-mobile/fenix/issues/21293_ In researching https://github.com/mozilla-mobile/fenix/issues/21293#issuecomment-921365710, I noticed that when we do the following STR: - initial state: search screen opened with "a" typed - type "s" We allocate 50 KiB bytes in `Response.Body.string()` irrespective of the received content size: - 22240 bytes Reader.readText - ~~20480 BufferedReader~~ _[ed: we've since addressed https://github.com/mozilla-mobile/android-components/issues/11015 so this value should 0]_ - 8480 InputStreamReader What's happening is that [`readText` allocates a buffer](https://github.com/JetBrains/kotlin/blob/92d200e093c693b3c06e53a39e0b0973b84c7ec5/libraries/stdlib/jvm/src/kotlin/io/ReadWrite.kt#L107) of [the default size](https://github.com/JetBrains/kotlin/blob/92d200e093c693b3c06e53a39e0b0973b84c7ec5/libraries/stdlib/jvm/src/kotlin/io/Constants.kt#L13), which is 8192 char = 16384 bytes = 16 KiB. **We can optimize the memory allocation by creating a buffer matching the length specified by the `Content-Length` header, e.g. if we're only received a string of 32 characters, we'd need 64 bytes, not 16k.** Since we don't know what the optimal large buffer size is, this should probably be `bufferLen = contentLengthHeader.coerceIn(1..DEFAULT_BUFFER_SIZE)`. In practice, it's difficult to determine the impact of allocating too much memory. It can theoretically cause power usage and cause memory churn in both the front-end and platform GCs. In practice, we haven't seen the Android GC seem to cause noticeable problems on a Moto G5, however. Furthermore, we don't know how often and by what magnitude the network requests we make are smaller than the default buffer size. Note: I didn't look into `InputStreamReader`'s allocations to see if we can optimize them. --- I wrote a proof-of-concept of this on my branch: https://github.com/mcomella/android-components/tree/host-21293-allocations ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FNXV2-18207)
True
Network `Response.Body.string()` would optimally allocate `Content-Length` header rather than 16 KiB default - _follow-up from https://github.com/mozilla-mobile/android-components/issues/11015 which is a child of https://github.com/mozilla-mobile/fenix/issues/21293_ In researching https://github.com/mozilla-mobile/fenix/issues/21293#issuecomment-921365710, I noticed that when we do the following STR: - initial state: search screen opened with "a" typed - type "s" We allocate 50 KiB bytes in `Response.Body.string()` irrespective of the received content size: - 22240 bytes Reader.readText - ~~20480 BufferedReader~~ _[ed: we've since addressed https://github.com/mozilla-mobile/android-components/issues/11015 so this value should 0]_ - 8480 InputStreamReader What's happening is that [`readText` allocates a buffer](https://github.com/JetBrains/kotlin/blob/92d200e093c693b3c06e53a39e0b0973b84c7ec5/libraries/stdlib/jvm/src/kotlin/io/ReadWrite.kt#L107) of [the default size](https://github.com/JetBrains/kotlin/blob/92d200e093c693b3c06e53a39e0b0973b84c7ec5/libraries/stdlib/jvm/src/kotlin/io/Constants.kt#L13), which is 8192 char = 16384 bytes = 16 KiB. **We can optimize the memory allocation by creating a buffer matching the length specified by the `Content-Length` header, e.g. if we're only received a string of 32 characters, we'd need 64 bytes, not 16k.** Since we don't know what the optimal large buffer size is, this should probably be `bufferLen = contentLengthHeader.coerceIn(1..DEFAULT_BUFFER_SIZE)`. In practice, it's difficult to determine the impact of allocating too much memory. It can theoretically cause power usage and cause memory churn in both the front-end and platform GCs. In practice, we haven't seen the Android GC seem to cause noticeable problems on a Moto G5, however. Furthermore, we don't know how often and by what magnitude the network requests we make are smaller than the default buffer size. Note: I didn't look into `InputStreamReader`'s allocations to see if we can optimize them. --- I wrote a proof-of-concept of this on my branch: https://github.com/mcomella/android-components/tree/host-21293-allocations ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FNXV2-18207)
perf
network response body string would optimally allocate content length header rather than kib default follow up from which is a child of in researching i noticed that when we do the following str initial state search screen opened with a typed type s we allocate kib bytes in response body string irrespective of the received content size bytes reader readtext bufferedreader inputstreamreader what s happening is that of which is char bytes kib we can optimize the memory allocation by creating a buffer matching the length specified by the content length header e g if we re only received a string of characters we d need bytes not since we don t know what the optimal large buffer size is this should probably be bufferlen contentlengthheader coercein default buffer size in practice it s difficult to determine the impact of allocating too much memory it can theoretically cause power usage and cause memory churn in both the front end and platform gcs in practice we haven t seen the android gc seem to cause noticeable problems on a moto however furthermore we don t know how often and by what magnitude the network requests we make are smaller than the default buffer size note i didn t look into inputstreamreader s allocations to see if we can optimize them i wrote a proof of concept of this on my branch ┆issue is synchronized with this
1
113,966
9,669,728,254
IssuesEvent
2019-05-21 18:05:55
rancher/rancher
https://api.github.com/repos/rancher/rancher
closed
UI Shows only 1000 nodes on a fresh load
[zube]: To Test internal kind/bug priority/1 status/ready-for-review status/resolved status/to-test status/triaged team/cn
**Rancher versions:** rancher/server or rancher/rancher: 2.0.x **Infrastructure Stack versions:** If you add more then 1000 nodes across all clusters and do a hard refresh on the global clusters tab you will only see the sum of all nodes equal a 1000.
2.0
UI Shows only 1000 nodes on a fresh load - **Rancher versions:** rancher/server or rancher/rancher: 2.0.x **Infrastructure Stack versions:** If you add more then 1000 nodes across all clusters and do a hard refresh on the global clusters tab you will only see the sum of all nodes equal a 1000.
non_perf
ui shows only nodes on a fresh load rancher versions rancher server or rancher rancher x infrastructure stack versions if you add more then nodes across all clusters and do a hard refresh on the global clusters tab you will only see the sum of all nodes equal a
0
12,565
7,919,614,768
IssuesEvent
2018-07-04 17:52:52
Alexander-Miller/treemacs
https://api.github.com/repos/Alexander-Miller/treemacs
opened
Granular filewatch updates
Enhancement Feature:Filewatch Performance
## First a description of the status quo: ### The shadow tree Every treemacs buffer possesses a backing data structure that is similar to a limited dom, except that I call it the shadow-tree, since it shadows the visible file tree(s). Specifically it mirrors the structure of expanded directories and tag groups. Every shadow-node in that tree is a struct that is made up of the following fields: * `key` - The node's unique key, usually its absolute path. This makes every node accessible in O(1) time through the hash table called the shadow-index (likewise unique in every treemacs buffer) * `parent` - The parent shadow-node, if any. * `children` - The immediate child nodes, if any. * `position` - A marker to the node's position. The same object as `treemacs-current-btn` would return. Not all nodes have a position. When a node is collapsed while it has expanded children of its own it is not removed from the shadow-tree. Instead its position is deleted and it is marked as closed. This way when the node is reopened its children will be reopened as well. * `closed` - Marker for closed nodes. See above. * `refresh-flag` - Marker for directories in need of being refreshed. ### The filewatch callback setup With `filewatch-mode` enabled treemacs will put under watch all directories that it displays, calling treemacs' callback function for every change in the watched directories. The exact makeup of the callback event argument can be read up in the docs for `file-notify-add-watch`. When it receives an event treemacs first decides if the event is relevant at all. This means it will ignore changes to lock or backup files. A file being *changed* is only of interest when `treemacs-git-mode` is enabled. The decision is ultimately user-configurable by means of `treemacs-ignored-file-predicates`. When a file was deleted or renamed treemacs will also update the shadow-trees in every buffer. As a last step treemacs will set the refresh flag of the directory the change *happened in* and start a timer to start a refresh after a delay of `treemacs-file-event-delay` ms. ### Recursive Descent Refresh On a refresh treemacs will - for every buffer, for every project in the buffer - begin descending its shadow tree. If a shadow-node's refresh flag is not set the descent continues recursively. If it is then treemacs will first do the aequivalent of moving to the node's position and pressing TAB twice, so as to refresh that directory and all its children. Since nodes below the one that was just refreshed may have their refresh flags set as well the descent will continue to reset these flags. A refresh therefore requires a full shadow tree traversal. ## What shoud change ### Weaknesses of the current approach While a major improvement over the very first version of `filewatch-mode`, which would simply refresh the entire project, the current system lacks accuracy and is typically overkill. Not only does it still refresh full projects - if only a file at the root was changed - refreshing at the directory level is still doing too much work when only one, or a few files have changed. The most common reason for filewatch-mode to strike is probably saving - again - the one file you are currently editing. ### Fine grained changes It'd be much more appropriate to apply only the actual changes and do the least amount of work necessary. So, given a set of changes, treemacs should learn to apply them one by one, as the amount of work needed for this can, in most cases, be expected to be much smaller than a hard refresh of an entire directory, especially if that directory is close to the root and refreshing it also entails reopening all its children. The refresh-flag, currently used just a boolean, can be changed to instead hold a list of files that were changed and the type of the change for a given directory. Upon a refresh descent treemacs could then make a decision whether to apply the changes one by one or do a full directory refresh based on the count or makeup of the refresh list. Different changes must be applied in different ways, and some are certainly more difficult than others. Deleting a file is simpler than creating one since the created file needs to be displayed in the right position, which is particularly challenging due to `treemacs-resort`. All in all there's the following cases to consider: * [ ] Deletion of a file * [ ] Deletion of a directory * [ ] Creation of a new file or directory * [ ] Renaming of a file or directory * [ ] Change to a file, requirung querying its git status In case of `git-mode` new files must be fontified accordingly. ### Don't get stuck The last time I implemented a major change I worked on it on a long-term basis, wanting to push in one go. This had the consequence that my published code and my wip code diverged to the point that by the end of it I couldn't even publish my bug fixes and needed to tell people to wait for the new feature to be finished first. Let's not do that this time. No non-functional intermediate state this time, the changes should come in small steps and be able to go on master.
True
Granular filewatch updates - ## First a description of the status quo: ### The shadow tree Every treemacs buffer possesses a backing data structure that is similar to a limited dom, except that I call it the shadow-tree, since it shadows the visible file tree(s). Specifically it mirrors the structure of expanded directories and tag groups. Every shadow-node in that tree is a struct that is made up of the following fields: * `key` - The node's unique key, usually its absolute path. This makes every node accessible in O(1) time through the hash table called the shadow-index (likewise unique in every treemacs buffer) * `parent` - The parent shadow-node, if any. * `children` - The immediate child nodes, if any. * `position` - A marker to the node's position. The same object as `treemacs-current-btn` would return. Not all nodes have a position. When a node is collapsed while it has expanded children of its own it is not removed from the shadow-tree. Instead its position is deleted and it is marked as closed. This way when the node is reopened its children will be reopened as well. * `closed` - Marker for closed nodes. See above. * `refresh-flag` - Marker for directories in need of being refreshed. ### The filewatch callback setup With `filewatch-mode` enabled treemacs will put under watch all directories that it displays, calling treemacs' callback function for every change in the watched directories. The exact makeup of the callback event argument can be read up in the docs for `file-notify-add-watch`. When it receives an event treemacs first decides if the event is relevant at all. This means it will ignore changes to lock or backup files. A file being *changed* is only of interest when `treemacs-git-mode` is enabled. The decision is ultimately user-configurable by means of `treemacs-ignored-file-predicates`. When a file was deleted or renamed treemacs will also update the shadow-trees in every buffer. As a last step treemacs will set the refresh flag of the directory the change *happened in* and start a timer to start a refresh after a delay of `treemacs-file-event-delay` ms. ### Recursive Descent Refresh On a refresh treemacs will - for every buffer, for every project in the buffer - begin descending its shadow tree. If a shadow-node's refresh flag is not set the descent continues recursively. If it is then treemacs will first do the aequivalent of moving to the node's position and pressing TAB twice, so as to refresh that directory and all its children. Since nodes below the one that was just refreshed may have their refresh flags set as well the descent will continue to reset these flags. A refresh therefore requires a full shadow tree traversal. ## What shoud change ### Weaknesses of the current approach While a major improvement over the very first version of `filewatch-mode`, which would simply refresh the entire project, the current system lacks accuracy and is typically overkill. Not only does it still refresh full projects - if only a file at the root was changed - refreshing at the directory level is still doing too much work when only one, or a few files have changed. The most common reason for filewatch-mode to strike is probably saving - again - the one file you are currently editing. ### Fine grained changes It'd be much more appropriate to apply only the actual changes and do the least amount of work necessary. So, given a set of changes, treemacs should learn to apply them one by one, as the amount of work needed for this can, in most cases, be expected to be much smaller than a hard refresh of an entire directory, especially if that directory is close to the root and refreshing it also entails reopening all its children. The refresh-flag, currently used just a boolean, can be changed to instead hold a list of files that were changed and the type of the change for a given directory. Upon a refresh descent treemacs could then make a decision whether to apply the changes one by one or do a full directory refresh based on the count or makeup of the refresh list. Different changes must be applied in different ways, and some are certainly more difficult than others. Deleting a file is simpler than creating one since the created file needs to be displayed in the right position, which is particularly challenging due to `treemacs-resort`. All in all there's the following cases to consider: * [ ] Deletion of a file * [ ] Deletion of a directory * [ ] Creation of a new file or directory * [ ] Renaming of a file or directory * [ ] Change to a file, requirung querying its git status In case of `git-mode` new files must be fontified accordingly. ### Don't get stuck The last time I implemented a major change I worked on it on a long-term basis, wanting to push in one go. This had the consequence that my published code and my wip code diverged to the point that by the end of it I couldn't even publish my bug fixes and needed to tell people to wait for the new feature to be finished first. Let's not do that this time. No non-functional intermediate state this time, the changes should come in small steps and be able to go on master.
perf
granular filewatch updates first a description of the status quo the shadow tree every treemacs buffer possesses a backing data structure that is similar to a limited dom except that i call it the shadow tree since it shadows the visible file tree s specifically it mirrors the structure of expanded directories and tag groups every shadow node in that tree is a struct that is made up of the following fields key the node s unique key usually its absolute path this makes every node accessible in o time through the hash table called the shadow index likewise unique in every treemacs buffer parent the parent shadow node if any children the immediate child nodes if any position a marker to the node s position the same object as treemacs current btn would return not all nodes have a position when a node is collapsed while it has expanded children of its own it is not removed from the shadow tree instead its position is deleted and it is marked as closed this way when the node is reopened its children will be reopened as well closed marker for closed nodes see above refresh flag marker for directories in need of being refreshed the filewatch callback setup with filewatch mode enabled treemacs will put under watch all directories that it displays calling treemacs callback function for every change in the watched directories the exact makeup of the callback event argument can be read up in the docs for file notify add watch when it receives an event treemacs first decides if the event is relevant at all this means it will ignore changes to lock or backup files a file being changed is only of interest when treemacs git mode is enabled the decision is ultimately user configurable by means of treemacs ignored file predicates when a file was deleted or renamed treemacs will also update the shadow trees in every buffer as a last step treemacs will set the refresh flag of the directory the change happened in and start a timer to start a refresh after a delay of treemacs file event delay ms recursive descent refresh on a refresh treemacs will for every buffer for every project in the buffer begin descending its shadow tree if a shadow node s refresh flag is not set the descent continues recursively if it is then treemacs will first do the aequivalent of moving to the node s position and pressing tab twice so as to refresh that directory and all its children since nodes below the one that was just refreshed may have their refresh flags set as well the descent will continue to reset these flags a refresh therefore requires a full shadow tree traversal what shoud change weaknesses of the current approach while a major improvement over the very first version of filewatch mode which would simply refresh the entire project the current system lacks accuracy and is typically overkill not only does it still refresh full projects if only a file at the root was changed refreshing at the directory level is still doing too much work when only one or a few files have changed the most common reason for filewatch mode to strike is probably saving again the one file you are currently editing fine grained changes it d be much more appropriate to apply only the actual changes and do the least amount of work necessary so given a set of changes treemacs should learn to apply them one by one as the amount of work needed for this can in most cases be expected to be much smaller than a hard refresh of an entire directory especially if that directory is close to the root and refreshing it also entails reopening all its children the refresh flag currently used just a boolean can be changed to instead hold a list of files that were changed and the type of the change for a given directory upon a refresh descent treemacs could then make a decision whether to apply the changes one by one or do a full directory refresh based on the count or makeup of the refresh list different changes must be applied in different ways and some are certainly more difficult than others deleting a file is simpler than creating one since the created file needs to be displayed in the right position which is particularly challenging due to treemacs resort all in all there s the following cases to consider deletion of a file deletion of a directory creation of a new file or directory renaming of a file or directory change to a file requirung querying its git status in case of git mode new files must be fontified accordingly don t get stuck the last time i implemented a major change i worked on it on a long term basis wanting to push in one go this had the consequence that my published code and my wip code diverged to the point that by the end of it i couldn t even publish my bug fixes and needed to tell people to wait for the new feature to be finished first let s not do that this time no non functional intermediate state this time the changes should come in small steps and be able to go on master
1
7,367
6,007,008,016
IssuesEvent
2017-06-06 01:13:14
explosion/spaCy
https://api.github.com/repos/explosion/spaCy
closed
Wrong lemmatisation of the verb 'Hope'
language / english performance
Hey Matthew, great fan of yours! I just found out a bug in the lemmatisation of the verb 'hope' (any tense). `doc = nlp(u'I hoped for the best')[1].lemma_ -> u'hop'` Is there any way to fix it? Thanks in advance!
True
Wrong lemmatisation of the verb 'Hope' - Hey Matthew, great fan of yours! I just found out a bug in the lemmatisation of the verb 'hope' (any tense). `doc = nlp(u'I hoped for the best')[1].lemma_ -> u'hop'` Is there any way to fix it? Thanks in advance!
perf
wrong lemmatisation of the verb hope hey matthew great fan of yours i just found out a bug in the lemmatisation of the verb hope any tense doc nlp u i hoped for the best lemma u hop is there any way to fix it thanks in advance
1
52,245
13,731,536,525
IssuesEvent
2020-10-05 01:24:36
doc-ai/nlp.js
https://api.github.com/repos/doc-ai/nlp.js
opened
CVE-2019-20920 (High) detected in handlebars-4.1.2.tgz
security vulnerability
## CVE-2019-20920 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p> <p>Path to dependency file: /nlp.js/package.json</p> <p>Path to vulnerable library: nlp.js/node_modules/handlebars/package.json</p> <p> Dependency Hierarchy: - :x: **handlebars-4.1.2.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Handlebars before 3.0.8 and 4.x before 4.5.3 is vulnerable to Arbitrary Code Execution. The lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript. This can be used to run arbitrary code on a server processing Handlebars templates or in a victim's browser (effectively serving as XSS). <p>Publish Date: 2020-09-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20920>CVE-2019-20920</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20920">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20920</a></p> <p>Release Date: 2020-09-30</p> <p>Fix Resolution: v3.0.8, v4.5.3</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END --> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.1.2","isTransitiveDependency":false,"dependencyTree":"handlebars:4.1.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v3.0.8, v4.5.3"}],"vulnerabilityIdentifier":"CVE-2019-20920","vulnerabilityDetails":"Handlebars before 3.0.8 and 4.x before 4.5.3 is vulnerable to Arbitrary Code Execution. The lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript. This can be used to run arbitrary code on a server processing Handlebars templates or in a victim\u0027s browser (effectively serving as XSS).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20920","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Changed","C":"High","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
True
CVE-2019-20920 (High) detected in handlebars-4.1.2.tgz - ## CVE-2019-20920 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p> <p>Path to dependency file: /nlp.js/package.json</p> <p>Path to vulnerable library: nlp.js/node_modules/handlebars/package.json</p> <p> Dependency Hierarchy: - :x: **handlebars-4.1.2.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Handlebars before 3.0.8 and 4.x before 4.5.3 is vulnerable to Arbitrary Code Execution. The lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript. This can be used to run arbitrary code on a server processing Handlebars templates or in a victim's browser (effectively serving as XSS). <p>Publish Date: 2020-09-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20920>CVE-2019-20920</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20920">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-20920</a></p> <p>Release Date: 2020-09-30</p> <p>Fix Resolution: v3.0.8, v4.5.3</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END --> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.1.2","isTransitiveDependency":false,"dependencyTree":"handlebars:4.1.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v3.0.8, v4.5.3"}],"vulnerabilityIdentifier":"CVE-2019-20920","vulnerabilityDetails":"Handlebars before 3.0.8 and 4.x before 4.5.3 is vulnerable to Arbitrary Code Execution. The lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript. This can be used to run arbitrary code on a server processing Handlebars templates or in a victim\u0027s browser (effectively serving as XSS).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20920","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Changed","C":"High","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
non_perf
cve high detected in handlebars tgz cve high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file nlp js package json path to vulnerable library nlp js node modules handlebars package json dependency hierarchy x handlebars tgz vulnerable library vulnerability details handlebars before and x before is vulnerable to arbitrary code execution the lookup helper fails to properly validate templates allowing attackers to submit templates that execute arbitrary javascript this can be used to run arbitrary code on a server processing handlebars templates or in a victim s browser effectively serving as xss publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails handlebars before and x before is vulnerable to arbitrary code execution the lookup helper fails to properly validate templates allowing attackers to submit templates that execute arbitrary javascript this can be used to run arbitrary code on a server processing handlebars templates or in a victim browser effectively serving as xss vulnerabilityurl
0
13,572
8,278,939,279
IssuesEvent
2018-09-18 00:12:41
select2/select2
https://api.github.com/repos/select2/select2
closed
How to add Delay to keydown event ?
feature: search performance
Select2 is slow when typing keywords if the select has a fair number of items. Maybe everytime keydown event runs searching. I want to insert like ```setTimeout```
True
How to add Delay to keydown event ? - Select2 is slow when typing keywords if the select has a fair number of items. Maybe everytime keydown event runs searching. I want to insert like ```setTimeout```
perf
how to add delay to keydown event is slow when typing keywords if the select has a fair number of items maybe everytime keydown event runs searching i want to insert like settimeout
1
209,666
23,730,737,668
IssuesEvent
2022-08-31 01:18:21
samqws-marketing/walmartlabs-concord
https://api.github.com/repos/samqws-marketing/walmartlabs-concord
opened
CVE-2022-25857 (High) detected in snakeyaml-1.27.jar
security vulnerability
## CVE-2022-25857 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.27.jar</b></p></summary> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p> <p>Path to dependency file: /server/plugins/noderoster/db/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar</p> <p> Dependency Hierarchy: - liquibase-core-3.5.1.jar (Root Library) - :x: **snakeyaml-1.27.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/walmartlabs-concord/commit/b9420f3b9e73a9d381266ece72f7afb756f35a76">b9420f3b9e73a9d381266ece72f7afb756f35a76</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package org.yaml:snakeyaml from 0 and before 1.31 are vulnerable to Denial of Service (DoS) due missing to nested depth limitation for collections. <p>Publish Date: 2022-08-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25857>CVE-2022-25857</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25857">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25857</a></p> <p>Release Date: 2022-08-30</p> <p>Fix Resolution: org.yaml:snakeyaml:1.31</p> </p> </details> <p></p>
True
CVE-2022-25857 (High) detected in snakeyaml-1.27.jar - ## CVE-2022-25857 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>snakeyaml-1.27.jar</b></p></summary> <p>YAML 1.1 parser and emitter for Java</p> <p>Library home page: <a href="http://www.snakeyaml.org">http://www.snakeyaml.org</a></p> <p>Path to dependency file: /server/plugins/noderoster/db/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar,/home/wss-scanner/.m2/repository/org/yaml/snakeyaml/1.27/snakeyaml-1.27.jar</p> <p> Dependency Hierarchy: - liquibase-core-3.5.1.jar (Root Library) - :x: **snakeyaml-1.27.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/walmartlabs-concord/commit/b9420f3b9e73a9d381266ece72f7afb756f35a76">b9420f3b9e73a9d381266ece72f7afb756f35a76</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package org.yaml:snakeyaml from 0 and before 1.31 are vulnerable to Denial of Service (DoS) due missing to nested depth limitation for collections. <p>Publish Date: 2022-08-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-25857>CVE-2022-25857</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25857">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25857</a></p> <p>Release Date: 2022-08-30</p> <p>Fix Resolution: org.yaml:snakeyaml:1.31</p> </p> </details> <p></p>
non_perf
cve high detected in snakeyaml jar cve high severity vulnerability vulnerable library snakeyaml jar yaml parser and emitter for java library home page a href path to dependency file server plugins noderoster db pom xml path to vulnerable library home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar home wss scanner repository org yaml snakeyaml snakeyaml jar dependency hierarchy liquibase core jar root library x snakeyaml jar vulnerable library found in head commit a href found in base branch master vulnerability details the package org yaml snakeyaml from and before are vulnerable to denial of service dos due missing to nested depth limitation for collections publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org yaml snakeyaml
0
17,357
9,740,299,752
IssuesEvent
2019-06-01 19:05:55
xamarin/Xamarin.Forms
https://api.github.com/repos/xamarin/Xamarin.Forms
closed
[Bug] Memory leak in Xamarin forms if Android Activity is killed but process is not
a/performance p/Android s/unverified t/bug :bug:
<!-- Bug report best practices: https://github.com/xamarin/Xamarin.Forms/wiki/Submitting-Issues --> ### Description There is an issue that I believe is related to [this](https://forums.xamarin.com/discussion/90940/improve-how-forms-handles-activity-restarts-on-android) which causes leaking memory. But I have found another scenario where this seems to be pretty bad. It is related to objects not being destroyed when a main activity is stopped but the owning process is not. ### Steps to Reproduce 1. Create a Xamarin forms app 2. Create a foreground service for android 3. Deploy the app as debug and make sure the service starts 4. Open recent apps and swipe the app to kill it. The Visual Studio debugger remains connected and the process doesn't die, but the Main Activity seems to be killed as well as the service (it's no longer in developer options -> running services) 5. Reopen the app. Upon reopening the app, the Main Activity goes through its full creation (OnCreate() is called), recreating a new instances of the Forms App class. Therefore it creates the main form again. Any objects created in that form are now duplicates, but the first were never destroyed. This can be shown by adding an object to the form with a static field that is an integer counter that is incremented in the class' constructor. See attached project for repro. The service is merely an easy way I've found to reproduce this issue consistently. We started throwing an exception any time we saw an object created twice, and logging those reports to HockeyApp. There are times an app, even without a service, seems to have its activity go out of memory but the process not be killed off. Upon reopening, the exception is thrown. ### Expected Behavior Lifetime of the forms should be coupled to the lifetime of the main activity, or have some way to get a reference to forms still in memory when the main activity is recreated if they exist. ### Actual Behavior Forms seem to hang around in memory when the main activity is killed but the process is not ### Basic Information - Version with issue: - Last known good version: N/A - IDE: Visual Studio 15.9.11 - Platform Target Frameworks: <!-- All that apply --> - Android: 9.0 Pie - Nuget Packages: - Affected Devices: S8, probably others ### Screenshots <!-- If the issue is a visual issue, please include screenshots showing the problem if possible --> ### Reproduction Link [Bug.zip](https://github.com/xamarin/Xamarin.Forms/files/3194495/Bug.zip)
True
[Bug] Memory leak in Xamarin forms if Android Activity is killed but process is not - <!-- Bug report best practices: https://github.com/xamarin/Xamarin.Forms/wiki/Submitting-Issues --> ### Description There is an issue that I believe is related to [this](https://forums.xamarin.com/discussion/90940/improve-how-forms-handles-activity-restarts-on-android) which causes leaking memory. But I have found another scenario where this seems to be pretty bad. It is related to objects not being destroyed when a main activity is stopped but the owning process is not. ### Steps to Reproduce 1. Create a Xamarin forms app 2. Create a foreground service for android 3. Deploy the app as debug and make sure the service starts 4. Open recent apps and swipe the app to kill it. The Visual Studio debugger remains connected and the process doesn't die, but the Main Activity seems to be killed as well as the service (it's no longer in developer options -> running services) 5. Reopen the app. Upon reopening the app, the Main Activity goes through its full creation (OnCreate() is called), recreating a new instances of the Forms App class. Therefore it creates the main form again. Any objects created in that form are now duplicates, but the first were never destroyed. This can be shown by adding an object to the form with a static field that is an integer counter that is incremented in the class' constructor. See attached project for repro. The service is merely an easy way I've found to reproduce this issue consistently. We started throwing an exception any time we saw an object created twice, and logging those reports to HockeyApp. There are times an app, even without a service, seems to have its activity go out of memory but the process not be killed off. Upon reopening, the exception is thrown. ### Expected Behavior Lifetime of the forms should be coupled to the lifetime of the main activity, or have some way to get a reference to forms still in memory when the main activity is recreated if they exist. ### Actual Behavior Forms seem to hang around in memory when the main activity is killed but the process is not ### Basic Information - Version with issue: - Last known good version: N/A - IDE: Visual Studio 15.9.11 - Platform Target Frameworks: <!-- All that apply --> - Android: 9.0 Pie - Nuget Packages: - Affected Devices: S8, probably others ### Screenshots <!-- If the issue is a visual issue, please include screenshots showing the problem if possible --> ### Reproduction Link [Bug.zip](https://github.com/xamarin/Xamarin.Forms/files/3194495/Bug.zip)
perf
memory leak in xamarin forms if android activity is killed but process is not description there is an issue that i believe is related to which causes leaking memory but i have found another scenario where this seems to be pretty bad it is related to objects not being destroyed when a main activity is stopped but the owning process is not steps to reproduce create a xamarin forms app create a foreground service for android deploy the app as debug and make sure the service starts open recent apps and swipe the app to kill it the visual studio debugger remains connected and the process doesn t die but the main activity seems to be killed as well as the service it s no longer in developer options running services reopen the app upon reopening the app the main activity goes through its full creation oncreate is called recreating a new instances of the forms app class therefore it creates the main form again any objects created in that form are now duplicates but the first were never destroyed this can be shown by adding an object to the form with a static field that is an integer counter that is incremented in the class constructor see attached project for repro the service is merely an easy way i ve found to reproduce this issue consistently we started throwing an exception any time we saw an object created twice and logging those reports to hockeyapp there are times an app even without a service seems to have its activity go out of memory but the process not be killed off upon reopening the exception is thrown expected behavior lifetime of the forms should be coupled to the lifetime of the main activity or have some way to get a reference to forms still in memory when the main activity is recreated if they exist actual behavior forms seem to hang around in memory when the main activity is killed but the process is not basic information version with issue last known good version n a ide visual studio platform target frameworks android pie nuget packages affected devices probably others screenshots reproduction link
1
10,927
7,347,504,835
IssuesEvent
2018-03-08 01:58:38
orbeon/orbeon-forms
https://api.github.com/repos/orbeon/orbeon-forms
closed
fr:wizard slow to keep list of top-level sections with errors
Form Runner Performance XBL Components
A lot of time is spent in `fr-update-section-status`, which is called upon `xforms-enabled`, `xxforms-constraints-changed`, `xforms-disabled`. With a large form (#3442), 19% of the load time is spent in `gatherSectionStatusJava`, mostly in `topLevelSectionsWithErrors`. And the reason is that for each control in the UI (close to 2000), the entire list of errors must be checked, and there can be about 1000 errors in the list. So we must find a better way to keep the information about the top-level sections with errors.
True
fr:wizard slow to keep list of top-level sections with errors - A lot of time is spent in `fr-update-section-status`, which is called upon `xforms-enabled`, `xxforms-constraints-changed`, `xforms-disabled`. With a large form (#3442), 19% of the load time is spent in `gatherSectionStatusJava`, mostly in `topLevelSectionsWithErrors`. And the reason is that for each control in the UI (close to 2000), the entire list of errors must be checked, and there can be about 1000 errors in the list. So we must find a better way to keep the information about the top-level sections with errors.
perf
fr wizard slow to keep list of top level sections with errors a lot of time is spent in fr update section status which is called upon xforms enabled xxforms constraints changed xforms disabled with a large form of the load time is spent in gathersectionstatusjava mostly in toplevelsectionswitherrors and the reason is that for each control in the ui close to the entire list of errors must be checked and there can be about errors in the list so we must find a better way to keep the information about the top level sections with errors
1
32,933
8,971,527,532
IssuesEvent
2019-01-29 16:06:25
avast-tl/retdec
https://api.github.com/repos/avast-tl/retdec
closed
CMake rules for pelib contain an unitialized variable
C-build-system C-pelib bug
File `deps/pelib/CMakeLists.txt` contains the following piece of code: ``` 44 # Force rebuild if switch happened. 45 # Seems like this is not needed on Linux, and not working on Windows :-( 46 BUILD_ALWAYS ${CHANGED} ``` However, the `CHANGED` variable is defined later on line ``` 57 check_if_variable_changed(PELIB_LOCAL_DIR CHANGED) ``` Questions: * Can you please verify that we actually want to use an uninitialized variable there? * Is that `BUILD_ALWAYS` part necessary? According to the comment above, it is not needed on Linux and does not work on Windows. Is it for macOS then?
1.0
CMake rules for pelib contain an unitialized variable - File `deps/pelib/CMakeLists.txt` contains the following piece of code: ``` 44 # Force rebuild if switch happened. 45 # Seems like this is not needed on Linux, and not working on Windows :-( 46 BUILD_ALWAYS ${CHANGED} ``` However, the `CHANGED` variable is defined later on line ``` 57 check_if_variable_changed(PELIB_LOCAL_DIR CHANGED) ``` Questions: * Can you please verify that we actually want to use an uninitialized variable there? * Is that `BUILD_ALWAYS` part necessary? According to the comment above, it is not needed on Linux and does not work on Windows. Is it for macOS then?
non_perf
cmake rules for pelib contain an unitialized variable file deps pelib cmakelists txt contains the following piece of code force rebuild if switch happened seems like this is not needed on linux and not working on windows build always changed however the changed variable is defined later on line check if variable changed pelib local dir changed questions can you please verify that we actually want to use an uninitialized variable there is that build always part necessary according to the comment above it is not needed on linux and does not work on windows is it for macos then
0
21,850
11,410,755,357
IssuesEvent
2020-02-01 00:45:41
microsoft/STL
https://api.github.com/repos/microsoft/STL
closed
<string>: basic_string's operator+ does strlen on the input one too many times
enhancement performance
While reviewing #419 I noticed something strange about how we handle `operator+`. Consider the following: https://github.com/microsoft/STL/blob/c5cde6ecbaa661fd23ad085e9e716e843059b104/stl/inc/xstring#L4279-L4289 Here, we do the obvious `strlen` (`_Traits::length`) on line 4285 to figure out the length of the `const _Elem*` parameter. However, then we call `operator+=(const _Elem*)`, which is going to do *another* `_Traits::length` on that input. This should be reworked to call `_Traits::length` only once, and ideally call an internal-only helper version of append that avoids creating extra exception handling states (since we know the appends here cannot reallocate, but the compiler does not know that).
True
<string>: basic_string's operator+ does strlen on the input one too many times - While reviewing #419 I noticed something strange about how we handle `operator+`. Consider the following: https://github.com/microsoft/STL/blob/c5cde6ecbaa661fd23ad085e9e716e843059b104/stl/inc/xstring#L4279-L4289 Here, we do the obvious `strlen` (`_Traits::length`) on line 4285 to figure out the length of the `const _Elem*` parameter. However, then we call `operator+=(const _Elem*)`, which is going to do *another* `_Traits::length` on that input. This should be reworked to call `_Traits::length` only once, and ideally call an internal-only helper version of append that avoids creating extra exception handling states (since we know the appends here cannot reallocate, but the compiler does not know that).
perf
basic string s operator does strlen on the input one too many times while reviewing i noticed something strange about how we handle operator consider the following here we do the obvious strlen traits length on line to figure out the length of the const elem parameter however then we call operator const elem which is going to do another traits length on that input this should be reworked to call traits length only once and ideally call an internal only helper version of append that avoids creating extra exception handling states since we know the appends here cannot reallocate but the compiler does not know that
1
115,407
24,760,469,546
IssuesEvent
2022-10-21 23:07:27
WordPress/openverse-catalog
https://api.github.com/repos/WordPress/openverse-catalog
opened
Capture thumbnails for Rawpixel
🟨 priority: medium ✨ goal: improvement 💻 aspect: code ⛔ status: blocked
## Description <!-- Describe the feature and how it solves the problem. --> For context, see the refactor PR: https://github.com/WordPress/openverse-catalog/pull/795#discussion_r1002145671 Presently we do not have a well-defined field for thumbnails. We will going to be deciding where thumbnails will go in a discussion on our [Make WP blog](https://make.wordpress.org/openverse/), but until we do it's best to exclude this field for now. A potential implementation is available in the above linked PR. ## Additional context <!-- Add any other context about the feature here; or delete the section entirely. --> #698, https://github.com/WordPress/openverse-catalog/pull/796#issuecomment-1287422102 ## Implementation <!-- Replace the [ ] with [x] to check the box. --> - [ ] 🙋 I would be interested in implementing this feature.
1.0
Capture thumbnails for Rawpixel - ## Description <!-- Describe the feature and how it solves the problem. --> For context, see the refactor PR: https://github.com/WordPress/openverse-catalog/pull/795#discussion_r1002145671 Presently we do not have a well-defined field for thumbnails. We will going to be deciding where thumbnails will go in a discussion on our [Make WP blog](https://make.wordpress.org/openverse/), but until we do it's best to exclude this field for now. A potential implementation is available in the above linked PR. ## Additional context <!-- Add any other context about the feature here; or delete the section entirely. --> #698, https://github.com/WordPress/openverse-catalog/pull/796#issuecomment-1287422102 ## Implementation <!-- Replace the [ ] with [x] to check the box. --> - [ ] 🙋 I would be interested in implementing this feature.
non_perf
capture thumbnails for rawpixel description for context see the refactor pr presently we do not have a well defined field for thumbnails we will going to be deciding where thumbnails will go in a discussion on our but until we do it s best to exclude this field for now a potential implementation is available in the above linked pr additional context implementation 🙋 i would be interested in implementing this feature
0
41,664
21,875,782,180
IssuesEvent
2022-05-19 09:59:57
playcanvas/engine
https://api.github.com/repos/playcanvas/engine
closed
Consider OVR_multiview2 extension to speed up multiview rendering
enhancement performance area: graphics area: xr
The OVR_multiview2 extension is part of the [WebGL API](https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API) and adds support for rendering into multiple views simultaneously. This especially useful for virtual reality (VR) and WebXR. https://developer.mozilla.org/en-US/docs/Web/API/OVR_multiview2 https://community.arm.com/arm-community-blogs/b/graphics-gaming-and-vr-blog/posts/optimizing-virtual-reality-understanding-multiview https://blog.mozvr.com/multiview-on-webxr/
True
Consider OVR_multiview2 extension to speed up multiview rendering - The OVR_multiview2 extension is part of the [WebGL API](https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API) and adds support for rendering into multiple views simultaneously. This especially useful for virtual reality (VR) and WebXR. https://developer.mozilla.org/en-US/docs/Web/API/OVR_multiview2 https://community.arm.com/arm-community-blogs/b/graphics-gaming-and-vr-blog/posts/optimizing-virtual-reality-understanding-multiview https://blog.mozvr.com/multiview-on-webxr/
perf
consider ovr extension to speed up multiview rendering the ovr extension is part of the and adds support for rendering into multiple views simultaneously this especially useful for virtual reality vr and webxr
1
33,327
15,875,566,658
IssuesEvent
2021-04-09 07:10:48
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
c++ tensor::save is about 5x slower to overwrite the existing tensor.
module: cpp module: performance triaged
I found that it is slow to overwrite existing tensor using tensor::save. ## Code ``` #include <chrono> #include <filesystem> #include <torch/torch.h> uint64_t now_ms() { return static_cast<uint64_t>( std::chrono::time_point_cast<std::chrono::milliseconds>( std::chrono::steady_clock::now()) .time_since_epoch() .count()); } int main(int argc, char **argv) { auto tensor = torch::randn({1, 200 * 1024}); auto begin_ms =now_ms(); for (int i = 0; i < 100; i++) { torch::save(tensor, "tmp_file"); } auto end_ms = now_ms(); std::cout << "insertion used " << end_ms - begin_ms << " ms" << std::endl; begin_ms = now_ms(); for (int i = 0; i < 100; i++) { std::filesystem::remove("tmp_file"); torch::save(tensor, "tmp_file"); } end_ms = now_ms(); std::cout << "insertion used " << end_ms - begin_ms << " ms" << std::endl; return 0; } ``` ## result insertion used 557 ms insertion used 82 ms ## Strace results % time seconds usecs/call calls errors syscall ------ ----------- ----------- --------- --------- ---------------- 54.03 0.051799 517 100 writev 25.79 0.024721 103 238 92 openat 16.86 0.016168 110 146 close % time seconds usecs/call calls errors syscall ------ ----------- ----------- --------- --------- ---------------- 61.46 0.020940 209 100 writev 19.27 0.006565 65 100 unlink 6.30 0.002147 9 238 92 openat 3.76 0.001281 9 137 mmap 2.01 0.000684 4 146 close The only significant difference between these two blocks is that writev syscall is twice slower in the first case. ## Expectation Overwrite should be as fast as removing the old tensor manually ## Environment Ubuntu 20.04 pytorch version is github master cc @yf225 @glaringlee @VitalyFedyunin @ngimel
True
c++ tensor::save is about 5x slower to overwrite the existing tensor. - I found that it is slow to overwrite existing tensor using tensor::save. ## Code ``` #include <chrono> #include <filesystem> #include <torch/torch.h> uint64_t now_ms() { return static_cast<uint64_t>( std::chrono::time_point_cast<std::chrono::milliseconds>( std::chrono::steady_clock::now()) .time_since_epoch() .count()); } int main(int argc, char **argv) { auto tensor = torch::randn({1, 200 * 1024}); auto begin_ms =now_ms(); for (int i = 0; i < 100; i++) { torch::save(tensor, "tmp_file"); } auto end_ms = now_ms(); std::cout << "insertion used " << end_ms - begin_ms << " ms" << std::endl; begin_ms = now_ms(); for (int i = 0; i < 100; i++) { std::filesystem::remove("tmp_file"); torch::save(tensor, "tmp_file"); } end_ms = now_ms(); std::cout << "insertion used " << end_ms - begin_ms << " ms" << std::endl; return 0; } ``` ## result insertion used 557 ms insertion used 82 ms ## Strace results % time seconds usecs/call calls errors syscall ------ ----------- ----------- --------- --------- ---------------- 54.03 0.051799 517 100 writev 25.79 0.024721 103 238 92 openat 16.86 0.016168 110 146 close % time seconds usecs/call calls errors syscall ------ ----------- ----------- --------- --------- ---------------- 61.46 0.020940 209 100 writev 19.27 0.006565 65 100 unlink 6.30 0.002147 9 238 92 openat 3.76 0.001281 9 137 mmap 2.01 0.000684 4 146 close The only significant difference between these two blocks is that writev syscall is twice slower in the first case. ## Expectation Overwrite should be as fast as removing the old tensor manually ## Environment Ubuntu 20.04 pytorch version is github master cc @yf225 @glaringlee @VitalyFedyunin @ngimel
perf
c tensor save is about slower to overwrite the existing tensor i found that it is slow to overwrite existing tensor using tensor save code include include include t now ms return static cast std chrono time point cast std chrono steady clock now time since epoch count int main int argc char argv auto tensor torch randn auto begin ms now ms for int i i i torch save tensor tmp file auto end ms now ms std cout insertion used end ms begin ms ms std endl begin ms now ms for int i i i std filesystem remove tmp file torch save tensor tmp file end ms now ms std cout insertion used end ms begin ms ms std endl return result insertion used ms insertion used ms strace results time seconds usecs call calls errors syscall writev openat close time seconds usecs call calls errors syscall writev unlink openat mmap close the only significant difference between these two blocks is that writev syscall is twice slower in the first case expectation overwrite should be as fast as removing the old tensor manually environment ubuntu pytorch version is github master cc glaringlee vitalyfedyunin ngimel
1
51,917
27,303,150,243
IssuesEvent
2023-02-24 05:03:51
Azure/azure-storage-fuse
https://api.github.com/repos/Azure/azure-storage-fuse
closed
Blobfuse v1 vs v2 performance
V2 performance awaiting-customer-response
### Which version of blobfuse was used? 1.4.5 and 2.0.1 (2.0.0.preview.4) ### Which OS distribution and version are you using? Oracle Linux 8.4 ### If relevant, please share your mount command. Mounted on pod using AKS 1.24.3, CSI driver 1.18.0. ### What was the issue encountered? Based on some read and write testing using a small Java application, it seems that Blobfuse 1.4.5 is slightly faster than 2.0.x, which is not what the readme claims. It should be the opposite. ### Have you found a mitigation/solution? No ### Please share logs if available. <html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:x="urn:schemas-microsoft-com:office:excel" xmlns="http://www.w3.org/TR/REC-html40">   | blobfuse1 | blobfuse2 | cifs -- | -- | -- | -- avg 8KB write (ms) | 15,944 | 25,732 | 60,362 avg 8KB read (ms) | 0,082 | 7,342 | 21,154 avg 1MB write (ms) | 75,284 | 88,814 | 110,066 avg 1MB read (ms) | 5,222 | 20,14 | 46,662 avg 10MB write (ms) | 220,666 | 249,016 | 261,364 avg 10MB read (ms) | 38,082 | 81,12 | 153,078 avg 100MB write (ms) | 1598,26 | 2488,62 | 2169,24 avg 100MB read (ms) | 349,36 | 566,06 | 1238,8 avg 1GB write (ms) | 16417,26 | 14706,82 | 27159,02 avg 1GB read (ms) | 4212,82 | 5268,98 | 21890,68 </body> </html>
True
Blobfuse v1 vs v2 performance - ### Which version of blobfuse was used? 1.4.5 and 2.0.1 (2.0.0.preview.4) ### Which OS distribution and version are you using? Oracle Linux 8.4 ### If relevant, please share your mount command. Mounted on pod using AKS 1.24.3, CSI driver 1.18.0. ### What was the issue encountered? Based on some read and write testing using a small Java application, it seems that Blobfuse 1.4.5 is slightly faster than 2.0.x, which is not what the readme claims. It should be the opposite. ### Have you found a mitigation/solution? No ### Please share logs if available. <html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:x="urn:schemas-microsoft-com:office:excel" xmlns="http://www.w3.org/TR/REC-html40">   | blobfuse1 | blobfuse2 | cifs -- | -- | -- | -- avg 8KB write (ms) | 15,944 | 25,732 | 60,362 avg 8KB read (ms) | 0,082 | 7,342 | 21,154 avg 1MB write (ms) | 75,284 | 88,814 | 110,066 avg 1MB read (ms) | 5,222 | 20,14 | 46,662 avg 10MB write (ms) | 220,666 | 249,016 | 261,364 avg 10MB read (ms) | 38,082 | 81,12 | 153,078 avg 100MB write (ms) | 1598,26 | 2488,62 | 2169,24 avg 100MB read (ms) | 349,36 | 566,06 | 1238,8 avg 1GB write (ms) | 16417,26 | 14706,82 | 27159,02 avg 1GB read (ms) | 4212,82 | 5268,98 | 21890,68 </body> </html>
perf
blobfuse vs performance which version of blobfuse was used and preview which os distribution and version are you using oracle linux if relevant please share your mount command mounted on pod using aks csi driver what was the issue encountered based on some read and write testing using a small java application it seems that blobfuse is slightly faster than x which is not what the readme claims it should be the opposite have you found a mitigation solution no please share logs if available html xmlns v urn schemas microsoft com vml xmlns o urn schemas microsoft com office office xmlns x urn schemas microsoft com office excel xmlns   cifs avg write ms avg read ms avg write ms avg read ms avg write ms avg read ms avg write ms avg read ms avg write ms avg read ms
1
499,462
14,447,684,186
IssuesEvent
2020-12-08 04:26:13
JuezUN/INGInious
https://api.github.com/repos/JuezUN/INGInious
closed
Changes on subproblems tab
Change request Frontend Medium Priority Plugins Task
- [x] Allow languages in multilang depending on grading environment, that is, allowing VHDL and Verilog for VHDL environment, python for Data Science and the others for multilang. - [x] Restrict the creation of subproblems to maximum 1. - [x] Show multiple languages in alphabetic order.
1.0
Changes on subproblems tab - - [x] Allow languages in multilang depending on grading environment, that is, allowing VHDL and Verilog for VHDL environment, python for Data Science and the others for multilang. - [x] Restrict the creation of subproblems to maximum 1. - [x] Show multiple languages in alphabetic order.
non_perf
changes on subproblems tab allow languages in multilang depending on grading environment that is allowing vhdl and verilog for vhdl environment python for data science and the others for multilang restrict the creation of subproblems to maximum show multiple languages in alphabetic order
0
6,830
6,624,438,912
IssuesEvent
2017-09-22 11:40:07
SatelliteQE/robottelo
https://api.github.com/repos/SatelliteQE/robottelo
closed
Optimize Docker Host infrastructure
6.2 6.3 API CLI High Infrastructure RFE test-failure UI
Painpoints: - randomly choosing what type of connection will be tested (unix vs. tcp) - CV published images with local docker (unix or tcp) requires self-registration but with multiple testing workers it severely interfere with other testing - local docker (unix or tcp) has issue with SELinux as foreman-selinux is not compatible with docker-selinux so lets avoid using local docker for tcp at least (always external tcp vs. local unix socket testing) - unix socket docker has outstanding bug - 400 malformed header pending (due to outdated ```excon``` gem) Remedies: - separate unix socket vs. tcp docker host tests (by introducing ```UnixSocketDockerTestCase```) - avoid using unix socket docker with CV published images (as you have to register satellite to itself) - externalize tcp docker host to avoid registration to itself (resolves #4401) - external tcp docker will utilize these CV published images testing - extrenal tcp dockers will be provided the similar way as are client VMs using context manager ```DockerHostMachine``` bound to libvirt image of preconfigured docker/host or better? atomic
1.0
Optimize Docker Host infrastructure - Painpoints: - randomly choosing what type of connection will be tested (unix vs. tcp) - CV published images with local docker (unix or tcp) requires self-registration but with multiple testing workers it severely interfere with other testing - local docker (unix or tcp) has issue with SELinux as foreman-selinux is not compatible with docker-selinux so lets avoid using local docker for tcp at least (always external tcp vs. local unix socket testing) - unix socket docker has outstanding bug - 400 malformed header pending (due to outdated ```excon``` gem) Remedies: - separate unix socket vs. tcp docker host tests (by introducing ```UnixSocketDockerTestCase```) - avoid using unix socket docker with CV published images (as you have to register satellite to itself) - externalize tcp docker host to avoid registration to itself (resolves #4401) - external tcp docker will utilize these CV published images testing - extrenal tcp dockers will be provided the similar way as are client VMs using context manager ```DockerHostMachine``` bound to libvirt image of preconfigured docker/host or better? atomic
non_perf
optimize docker host infrastructure painpoints randomly choosing what type of connection will be tested unix vs tcp cv published images with local docker unix or tcp requires self registration but with multiple testing workers it severely interfere with other testing local docker unix or tcp has issue with selinux as foreman selinux is not compatible with docker selinux so lets avoid using local docker for tcp at least always external tcp vs local unix socket testing unix socket docker has outstanding bug malformed header pending due to outdated excon gem remedies separate unix socket vs tcp docker host tests by introducing unixsocketdockertestcase avoid using unix socket docker with cv published images as you have to register satellite to itself externalize tcp docker host to avoid registration to itself resolves external tcp docker will utilize these cv published images testing extrenal tcp dockers will be provided the similar way as are client vms using context manager dockerhostmachine bound to libvirt image of preconfigured docker host or better atomic
0
86,756
10,516,372,544
IssuesEvent
2019-09-28 17:03:50
arthurpaulino/miraiml
https://api.github.com/repos/arthurpaulino/miraiml
closed
extract_model documentation
approved documentation
Mention that the extracted model does not output predictions as the engine does because it does not do OOF ensembles for each base model. Also, mention that it returns `None` if the engine hasn't completed at least one cycle.
1.0
extract_model documentation - Mention that the extracted model does not output predictions as the engine does because it does not do OOF ensembles for each base model. Also, mention that it returns `None` if the engine hasn't completed at least one cycle.
non_perf
extract model documentation mention that the extracted model does not output predictions as the engine does because it does not do oof ensembles for each base model also mention that it returns none if the engine hasn t completed at least one cycle
0
8,855
6,668,756,626
IssuesEvent
2017-10-03 16:52:36
typelead/eta
https://api.github.com/repos/typelead/eta
closed
Make case evaluation more efficient in certain cases
performance
```haskell case x of Nothing -> branch1 Just x -> branch2 ``` generates as: ```haskell Closure result = x.evaluate(context); DataCon result = (DataCon) result; int tag = result.getTag(); if (tag == 1) { branch1.enter(context); } else { branch2.enter(context); } ``` We can remove the `getTag()` call altogether because we *know* that the result must be either a Nothing or a Just! The new code: ```haskell Closure result = x.evaluate(context); if (result instanceof NothingD) { branch1.enter(context); } else { branch2.enter(context); } ``` This should yield a minor perf improvement. This optimisation applies whenever there are exactly two cases to match regardless of the number of constructors a type has. We can even extend this to up to 4 cases but we'll have to see how much of a benefit it will give us.
True
Make case evaluation more efficient in certain cases - ```haskell case x of Nothing -> branch1 Just x -> branch2 ``` generates as: ```haskell Closure result = x.evaluate(context); DataCon result = (DataCon) result; int tag = result.getTag(); if (tag == 1) { branch1.enter(context); } else { branch2.enter(context); } ``` We can remove the `getTag()` call altogether because we *know* that the result must be either a Nothing or a Just! The new code: ```haskell Closure result = x.evaluate(context); if (result instanceof NothingD) { branch1.enter(context); } else { branch2.enter(context); } ``` This should yield a minor perf improvement. This optimisation applies whenever there are exactly two cases to match regardless of the number of constructors a type has. We can even extend this to up to 4 cases but we'll have to see how much of a benefit it will give us.
perf
make case evaluation more efficient in certain cases haskell case x of nothing just x generates as haskell closure result x evaluate context datacon result datacon result int tag result gettag if tag enter context else enter context we can remove the gettag call altogether because we know that the result must be either a nothing or a just the new code haskell closure result x evaluate context if result instanceof nothingd enter context else enter context this should yield a minor perf improvement this optimisation applies whenever there are exactly two cases to match regardless of the number of constructors a type has we can even extend this to up to cases but we ll have to see how much of a benefit it will give us
1
275,042
30,188,406,863
IssuesEvent
2023-07-04 13:40:06
gabriel-milan/denoising-autoencoder
https://api.github.com/repos/gabriel-milan/denoising-autoencoder
opened
CVE-2022-41902 (Critical) detected in tensorflow-2.5.0-cp37-cp37m-manylinux2010_x86_64.whl
Mend: dependency security vulnerability
## CVE-2022-41902 - Critical Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.5.0-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/aa/fd/993aa1333eb54d9f000863fe8ec61e41d12eb833dea51484c76c038718b5/tensorflow-2.5.0-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/aa/fd/993aa1333eb54d9f000863fe8ec61e41d12eb833dea51484c76c038718b5/tensorflow-2.5.0-cp37-cp37m-manylinux2010_x86_64.whl</a></p> <p>Path to dependency file: /training/requirements.txt</p> <p>Path to vulnerable library: /training/requirements.txt</p> <p> Dependency Hierarchy: - :x: **tensorflow-2.5.0-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/gabriel-milan/denoising-autoencoder/commit/22186005a9ff5cf052b53f8bb5aa092b9ea8a670">22186005a9ff5cf052b53f8bb5aa092b9ea8a670</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/critical_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an open source platform for machine learning. The function MakeGrapplerFunctionItem takes arguments that determine the sizes of inputs and outputs. If the inputs given are greater than or equal to the sizes of the outputs, an out-of-bounds memory read or a crash is triggered. We have patched the issue in GitHub commit a65411a1d69edfb16b25907ffb8f73556ce36bb7. The fix will be included in TensorFlow 2.11.0. We will also cherrypick this commit on TensorFlow 2.8.4, 2.9.3, and 2.10.1. <p>Publish Date: 2022-12-06 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41902>CVE-2022-41902</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-cg88-rpvp-cjv5">https://github.com/advisories/GHSA-cg88-rpvp-cjv5</a></p> <p>Release Date: 2022-09-30</p> <p>Fix Resolution: tensorflow - 2.8.4, 2.9.3, 2.10.1, 2.11.0, tensorflow-cpu - 2.8.4, 2.9.3, 2.10.1, 2.11.0, tensorflow-gpu - 2.8.4, 2.9.3, 2.10.1, 2.11.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-41902 (Critical) detected in tensorflow-2.5.0-cp37-cp37m-manylinux2010_x86_64.whl - ## CVE-2022-41902 - Critical Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.5.0-cp37-cp37m-manylinux2010_x86_64.whl</b></p></summary> <p>TensorFlow is an open source machine learning framework for everyone.</p> <p>Library home page: <a href="https://files.pythonhosted.org/packages/aa/fd/993aa1333eb54d9f000863fe8ec61e41d12eb833dea51484c76c038718b5/tensorflow-2.5.0-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/aa/fd/993aa1333eb54d9f000863fe8ec61e41d12eb833dea51484c76c038718b5/tensorflow-2.5.0-cp37-cp37m-manylinux2010_x86_64.whl</a></p> <p>Path to dependency file: /training/requirements.txt</p> <p>Path to vulnerable library: /training/requirements.txt</p> <p> Dependency Hierarchy: - :x: **tensorflow-2.5.0-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/gabriel-milan/denoising-autoencoder/commit/22186005a9ff5cf052b53f8bb5aa092b9ea8a670">22186005a9ff5cf052b53f8bb5aa092b9ea8a670</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/critical_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> TensorFlow is an open source platform for machine learning. The function MakeGrapplerFunctionItem takes arguments that determine the sizes of inputs and outputs. If the inputs given are greater than or equal to the sizes of the outputs, an out-of-bounds memory read or a crash is triggered. We have patched the issue in GitHub commit a65411a1d69edfb16b25907ffb8f73556ce36bb7. The fix will be included in TensorFlow 2.11.0. We will also cherrypick this commit on TensorFlow 2.8.4, 2.9.3, and 2.10.1. <p>Publish Date: 2022-12-06 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-41902>CVE-2022-41902</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-cg88-rpvp-cjv5">https://github.com/advisories/GHSA-cg88-rpvp-cjv5</a></p> <p>Release Date: 2022-09-30</p> <p>Fix Resolution: tensorflow - 2.8.4, 2.9.3, 2.10.1, 2.11.0, tensorflow-cpu - 2.8.4, 2.9.3, 2.10.1, 2.11.0, tensorflow-gpu - 2.8.4, 2.9.3, 2.10.1, 2.11.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_perf
cve critical detected in tensorflow whl cve critical severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file training requirements txt path to vulnerable library training requirements txt dependency hierarchy x tensorflow whl vulnerable library found in head commit a href found in base branch master vulnerability details tensorflow is an open source platform for machine learning the function makegrapplerfunctionitem takes arguments that determine the sizes of inputs and outputs if the inputs given are greater than or equal to the sizes of the outputs an out of bounds memory read or a crash is triggered we have patched the issue in github commit the fix will be included in tensorflow we will also cherrypick this commit on tensorflow and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with mend
0
227,029
7,526,168,354
IssuesEvent
2018-04-13 13:14:00
dankelley/oce
https://api.github.com/repos/dankelley/oce
opened
ctdTrim() not handling BIO equilibration phase well
ctd high priority
These data are not public, so I renamed the datafiles. The diagram after the sample cose shows the results. The black curve is not offset -- it's the actual data. What seems to be going on is that the ctd was lowered to about 10m for a while, then brought to the surface, and then it did a normal downcast followed by an upcast (the last with bottles, I guess). The `ctdTrim(..., method="sbe")` is not removing this early 10m portion, but I think it should. The red line is the BIO ODF file, likely created with SBE routines. I think `ctdTrim(..., method="sbe")` needs adjustment to remove this portion. Actually, I thought it was already doing that, but in any case I'll look into the code. (First I'll look in the SBE docs to see if this is standard practice.) **Code (filenames altered)** ```R library(oce) if (!interactive()) png("trim_issue.png") d <- read.oce("a.CNV") dt <- ctdTrim(d) dtsbe <- ctdTrim(d, method="sbe") d1m <- read.ctd.odf("a.ODF") plotScan(d, ylim=c(-10, max(d22[["pressure"]]+10))) lines(d1m[["scan"]], d1m[["pressure"]]+10, col=2) lines(dt[["scan"]], dt[["pressure"]]-10, col=3) lines(dtsbe[["scan"]], dtsbe[["pressure"]]-20, col=4) legend("topright", col=1:4, legend=c("raw CNV","BIO ODF","ctdTrim", "ctdTrim sbe"),lwd=1) if (!interactive()) dev.off() ``` **Results (see legend)** ![trim_issue](https://user-images.githubusercontent.com/99469/38736729-59e84a6a-3f03-11e8-82e2-55a1321a0276.png)
1.0
ctdTrim() not handling BIO equilibration phase well - These data are not public, so I renamed the datafiles. The diagram after the sample cose shows the results. The black curve is not offset -- it's the actual data. What seems to be going on is that the ctd was lowered to about 10m for a while, then brought to the surface, and then it did a normal downcast followed by an upcast (the last with bottles, I guess). The `ctdTrim(..., method="sbe")` is not removing this early 10m portion, but I think it should. The red line is the BIO ODF file, likely created with SBE routines. I think `ctdTrim(..., method="sbe")` needs adjustment to remove this portion. Actually, I thought it was already doing that, but in any case I'll look into the code. (First I'll look in the SBE docs to see if this is standard practice.) **Code (filenames altered)** ```R library(oce) if (!interactive()) png("trim_issue.png") d <- read.oce("a.CNV") dt <- ctdTrim(d) dtsbe <- ctdTrim(d, method="sbe") d1m <- read.ctd.odf("a.ODF") plotScan(d, ylim=c(-10, max(d22[["pressure"]]+10))) lines(d1m[["scan"]], d1m[["pressure"]]+10, col=2) lines(dt[["scan"]], dt[["pressure"]]-10, col=3) lines(dtsbe[["scan"]], dtsbe[["pressure"]]-20, col=4) legend("topright", col=1:4, legend=c("raw CNV","BIO ODF","ctdTrim", "ctdTrim sbe"),lwd=1) if (!interactive()) dev.off() ``` **Results (see legend)** ![trim_issue](https://user-images.githubusercontent.com/99469/38736729-59e84a6a-3f03-11e8-82e2-55a1321a0276.png)
non_perf
ctdtrim not handling bio equilibration phase well these data are not public so i renamed the datafiles the diagram after the sample cose shows the results the black curve is not offset it s the actual data what seems to be going on is that the ctd was lowered to about for a while then brought to the surface and then it did a normal downcast followed by an upcast the last with bottles i guess the ctdtrim method sbe is not removing this early portion but i think it should the red line is the bio odf file likely created with sbe routines i think ctdtrim method sbe needs adjustment to remove this portion actually i thought it was already doing that but in any case i ll look into the code first i ll look in the sbe docs to see if this is standard practice code filenames altered r library oce if interactive png trim issue png d read oce a cnv dt ctdtrim d dtsbe ctdtrim d method sbe read ctd odf a odf plotscan d ylim c max lines col lines dt dt col lines dtsbe dtsbe col legend topright col legend c raw cnv bio odf ctdtrim ctdtrim sbe lwd if interactive dev off results see legend
0
40,191
20,625,178,744
IssuesEvent
2022-03-07 21:39:57
datafuselabs/databend
https://api.github.com/repos/datafuselabs/databend
closed
Feature: Simd Selection of column filter
C-feature C-performance community-take C-good first issue
**Summary** Description for this feature. Now filter works with memcpy one by one, we can use `Vec.extend` function to copy a batch of at most 64 items one time. https://github.com/datafuselabs/databend/blob/dc058c9d22baa9e61763661f77cd10ec62c87c48/common/datavalues/src/columns/primitive/mod.rs#L187-L204 **Some useful tools:** - Bitmap's Chunk function: ``` impl Bitmap { /// Returns an iterator over bits in chunks of `T`, which is useful for /// bit operations. pub fn chunks<T: BitChunk>(&self) -> BitChunks<T> { BitChunks::new(&self.bytes, self.offset, self.length) } } ``` - Function: `__builtin_ctz` ``` int __builtin_ctz (unsigned int x) Returns the number of trailing 0-bits in x, starting at the least significant bit position. If x is 0, the result is undefined. ``` Rust's https://doc.rust-lang.org/std/primitive.u32.html#method.trailing_zeros - Blog: https://lemire.me/blog/2018/02/21/iterating-over-set-bits-quickly/
True
Feature: Simd Selection of column filter - **Summary** Description for this feature. Now filter works with memcpy one by one, we can use `Vec.extend` function to copy a batch of at most 64 items one time. https://github.com/datafuselabs/databend/blob/dc058c9d22baa9e61763661f77cd10ec62c87c48/common/datavalues/src/columns/primitive/mod.rs#L187-L204 **Some useful tools:** - Bitmap's Chunk function: ``` impl Bitmap { /// Returns an iterator over bits in chunks of `T`, which is useful for /// bit operations. pub fn chunks<T: BitChunk>(&self) -> BitChunks<T> { BitChunks::new(&self.bytes, self.offset, self.length) } } ``` - Function: `__builtin_ctz` ``` int __builtin_ctz (unsigned int x) Returns the number of trailing 0-bits in x, starting at the least significant bit position. If x is 0, the result is undefined. ``` Rust's https://doc.rust-lang.org/std/primitive.u32.html#method.trailing_zeros - Blog: https://lemire.me/blog/2018/02/21/iterating-over-set-bits-quickly/
perf
feature simd selection of column filter summary description for this feature now filter works with memcpy one by one we can use vec extend function to copy a batch of at most items one time some useful tools bitmap s chunk function impl bitmap returns an iterator over bits in chunks of t which is useful for bit operations pub fn chunks self bitchunks bitchunks new self bytes self offset self length function builtin ctz int builtin ctz unsigned int x returns the number of trailing bits in x starting at the least significant bit position if x is the result is undefined rust s blog
1
11,819
3,535,529,296
IssuesEvent
2016-01-16 15:51:51
paperjs/paper.js
https://api.github.com/repos/paperjs/paper.js
closed
Method for adding a layer to a project is not documented
cat: documentation type: feature
`project#addChild` is not documented. If a layer has been removed 'addChild' seems to be the only way to add it back.
1.0
Method for adding a layer to a project is not documented - `project#addChild` is not documented. If a layer has been removed 'addChild' seems to be the only way to add it back.
non_perf
method for adding a layer to a project is not documented project addchild is not documented if a layer has been removed addchild seems to be the only way to add it back
0
87,735
8,120,578,561
IssuesEvent
2018-08-16 03:38:55
istio/istio
https://api.github.com/repos/istio/istio
closed
`make lint` failed because of type alias in mixer
area/test and release stale
<!-- Please see https://istio.io/help and if you are a user of Istio, please file issues in https://github.com/istio/issues/issues instead of here. Only confirmed, triaged and labelled issues should be filed here. Please add the correct labels and epics (and priority and milestones if you have that information) --> Because of https://github.com/istio/istio/blob/master/pilot/pkg/networking/plugin/mixer/mixer.go#L39 ``` type attribute = *mpb.Attributes_AttributeValue ``` Seems go 1.10.3 does not recognize this type alias. ``` /Users/gyliu/go/bin/goimports /tmp/~output.go:37:16: expected type, found '=' make: *** [lint] Error 2 ``` ``` LiuGuangyas-MacBook-Pro:istio gyliu$ go version go version go1.10.3 darwin/amd64 ``` /cc @kyessenov
1.0
`make lint` failed because of type alias in mixer - <!-- Please see https://istio.io/help and if you are a user of Istio, please file issues in https://github.com/istio/issues/issues instead of here. Only confirmed, triaged and labelled issues should be filed here. Please add the correct labels and epics (and priority and milestones if you have that information) --> Because of https://github.com/istio/istio/blob/master/pilot/pkg/networking/plugin/mixer/mixer.go#L39 ``` type attribute = *mpb.Attributes_AttributeValue ``` Seems go 1.10.3 does not recognize this type alias. ``` /Users/gyliu/go/bin/goimports /tmp/~output.go:37:16: expected type, found '=' make: *** [lint] Error 2 ``` ``` LiuGuangyas-MacBook-Pro:istio gyliu$ go version go version go1.10.3 darwin/amd64 ``` /cc @kyessenov
non_perf
make lint failed because of type alias in mixer please see and if you are a user of istio please file issues in instead of here only confirmed triaged and labelled issues should be filed here please add the correct labels and epics and priority and milestones if you have that information because of type attribute mpb attributes attributevalue seems go does not recognize this type alias users gyliu go bin goimports tmp output go expected type found make error liuguangyas macbook pro istio gyliu go version go version darwin cc kyessenov
0
78,089
9,661,829,107
IssuesEvent
2019-05-20 19:06:42
brave/brave-browser
https://api.github.com/repos/brave/brave-browser
closed
Restyle chrome: pages
about-pages/rebrand design feature/user-interface priority/P2
- [x] Restyle bookmarks - [x] Restyle history - [x] Restyle preferences - [x] Downloads page
1.0
Restyle chrome: pages - - [x] Restyle bookmarks - [x] Restyle history - [x] Restyle preferences - [x] Downloads page
non_perf
restyle chrome pages restyle bookmarks restyle history restyle preferences downloads page
0
74,069
3,427,959,480
IssuesEvent
2015-12-10 06:12:21
crcn/interface-builder
https://api.github.com/repos/crcn/interface-builder
closed
compilation strategy according to various conditions
compiling feature high priority wontfix
The compilation strategy for visual components should vary. An example of this might be centering a div with a dynamic width vs fixed width. ```javascript { type: box, width: '50%', height: '50%', x: '50%', y: '50%' } { type: box, width: 100px, height: 100px, x: '50%', y: '50%' } ``` Would get compiled to: ``` <div style="width:50%;height:50%;left:50%;top:50%;transform:translate(-50%,-50%);" /> <div style="width:50%;height:50%;margin: 0px auto;" /> ``` respectively.
1.0
compilation strategy according to various conditions - The compilation strategy for visual components should vary. An example of this might be centering a div with a dynamic width vs fixed width. ```javascript { type: box, width: '50%', height: '50%', x: '50%', y: '50%' } { type: box, width: 100px, height: 100px, x: '50%', y: '50%' } ``` Would get compiled to: ``` <div style="width:50%;height:50%;left:50%;top:50%;transform:translate(-50%,-50%);" /> <div style="width:50%;height:50%;margin: 0px auto;" /> ``` respectively.
non_perf
compilation strategy according to various conditions the compilation strategy for visual components should vary an example of this might be centering a div with a dynamic width vs fixed width javascript type box width height x y type box width height x y would get compiled to respectively
0
36,317
7,888,580,110
IssuesEvent
2018-06-27 22:45:21
Azure/batch-shipyard
https://api.github.com/repos/Azure/batch-shipyard
reopened
Exception in task_file_mover when ingressing files from other batch tasks
defect
I've set up a job which contains several fetch tasks, and a single processing task that depends on the fetch tasks. For convenience, I tried using the Azure Batch `input_data` type in the processing task to get all the data from the preceding fetch tasks, but I'm running into this exception with `task_file_mover`. ``` Traceback (most recent call last): File "task_file_mover.py", line 148, in <module> main() File "task_file_mover.py", line 123, in main batch_client = _create_credentials() File "task_file_mover.py", line 60, in _create_credentials ba, url, bakey = os.environ['SHIPYARD_BATCH_ENV'].split(';') ValueError: not enough values to unpack (expected 3, got 2) ``` I'm using KeyVault for supplying the batch credentials, like: ```json { "credentials": { "batch": { "account": "myaccount", "account_key_keyvault_secret_id": "https://myvault.vault.azure.net/secrets/batchkey", "account_service_url": "https://myaccount.westus.batch.azure.com" } } } ```
1.0
Exception in task_file_mover when ingressing files from other batch tasks - I've set up a job which contains several fetch tasks, and a single processing task that depends on the fetch tasks. For convenience, I tried using the Azure Batch `input_data` type in the processing task to get all the data from the preceding fetch tasks, but I'm running into this exception with `task_file_mover`. ``` Traceback (most recent call last): File "task_file_mover.py", line 148, in <module> main() File "task_file_mover.py", line 123, in main batch_client = _create_credentials() File "task_file_mover.py", line 60, in _create_credentials ba, url, bakey = os.environ['SHIPYARD_BATCH_ENV'].split(';') ValueError: not enough values to unpack (expected 3, got 2) ``` I'm using KeyVault for supplying the batch credentials, like: ```json { "credentials": { "batch": { "account": "myaccount", "account_key_keyvault_secret_id": "https://myvault.vault.azure.net/secrets/batchkey", "account_service_url": "https://myaccount.westus.batch.azure.com" } } } ```
non_perf
exception in task file mover when ingressing files from other batch tasks i ve set up a job which contains several fetch tasks and a single processing task that depends on the fetch tasks  for convenience i tried using the azure batch input data type in the processing task to get all the data from the preceding fetch tasks but i m running into this exception with task file mover traceback most recent call last file task file mover py line in main file task file mover py line in main batch client create credentials file task file mover py line in create credentials ba url bakey os environ split valueerror not enough values to unpack expected got i m using keyvault for supplying the batch credentials like json credentials batch account myaccount account key keyvault secret id account service url
0
360,875
25,314,766,533
IssuesEvent
2022-11-17 20:34:04
BCDevOps/developer-experience
https://api.github.com/repos/BCDevOps/developer-experience
opened
SDN Oncall Documentation - How to Create a Test App for Demonstrating DataClass settings
documentation team/DXC ops and shared services NSXT/SDN
**Describe the issue** In order for on-call staff to properly assist with troubleshooting issues involving Data-guards and similar issues where pod DataClass is involved, the process by which to create a basic application stack that allows for interaction between DataClasses. The full context of the document allows for creating this from scratch, but most likely part or all of it may already be setup for use. **Additional context** Link to GitHub PR: <post here when created> **How does this benefit the users of our platform?** Proper documentation ensures all Platform Operations on-call staff have access to essential information to best help them quickly assess issues on NSX-backed Openshift clusters. **Definition of Done** - [ ] Preliminary work to test how best to create a suitable test app for on-call to play with. - [ ] create initial PR with starting content. - [ ] promote PR for internal review/approval. - [ ] update PR with requested changes as appropriate. - [ ] merge PR when approved.
1.0
SDN Oncall Documentation - How to Create a Test App for Demonstrating DataClass settings - **Describe the issue** In order for on-call staff to properly assist with troubleshooting issues involving Data-guards and similar issues where pod DataClass is involved, the process by which to create a basic application stack that allows for interaction between DataClasses. The full context of the document allows for creating this from scratch, but most likely part or all of it may already be setup for use. **Additional context** Link to GitHub PR: <post here when created> **How does this benefit the users of our platform?** Proper documentation ensures all Platform Operations on-call staff have access to essential information to best help them quickly assess issues on NSX-backed Openshift clusters. **Definition of Done** - [ ] Preliminary work to test how best to create a suitable test app for on-call to play with. - [ ] create initial PR with starting content. - [ ] promote PR for internal review/approval. - [ ] update PR with requested changes as appropriate. - [ ] merge PR when approved.
non_perf
sdn oncall documentation how to create a test app for demonstrating dataclass settings describe the issue in order for on call staff to properly assist with troubleshooting issues involving data guards and similar issues where pod dataclass is involved the process by which to create a basic application stack that allows for interaction between dataclasses the full context of the document allows for creating this from scratch but most likely part or all of it may already be setup for use additional context link to github pr how does this benefit the users of our platform proper documentation ensures all platform operations on call staff have access to essential information to best help them quickly assess issues on nsx backed openshift clusters definition of done preliminary work to test how best to create a suitable test app for on call to play with create initial pr with starting content promote pr for internal review approval update pr with requested changes as appropriate merge pr when approved
0
24,898
12,423,487,739
IssuesEvent
2020-05-24 05:56:24
sequelize/sequelize
https://api.github.com/repos/sequelize/sequelize
closed
Upsert with `returning: true` requires extra read from DB
hard type: performance type: refactor
## Issue Description `Model.upsert` accepts a `returning` option since https://github.com/sequelize/sequelize/pull/8924. However, the way this is implemented is via a platform-specific upsert that returns the record's primary key **and then** a subsequent SELECT call to fetch the record. Relevant code: https://github.com/sequelize/sequelize/blob/c66663ed70d80a5cf661f99a2139de7c927c5ffe/lib/model.js#L2495-L2501 This is problematic for a few reasons: 1. Extra round trip to database. This should be unnecessary on platforms that support returning the full record from UPDATE/INSERTs like Postgres. 2. Subsequent read may hit a read replica instead: https://github.com/sequelize/sequelize/issues/9216 3. Use of a non-repeatable read means `upsert` can return a different result than what was actually upserted. For example, on [Postgres's default READ COMMITTED transaction isolation level](https://www.postgresql.org/docs/current/transaction-iso.html), a transaction may "re-reads data it has previously read and finds that data has been modified by another transaction (that committed since the initial read)." The fact that the SELECT runs in the same transaction as the INSERT/UPDATE doesn't help. Thus, `upsert` isn't returning the record that was created or the record as it existed as of the update, it's returning the record that existed when the subsequent `findByPk` was executed. In an extreme case, the record could have even been deleted, which would mean `upsert` returns `null`. ### What are you doing? Sorry, no SSCCE since reproducing the non-repeatable read requires precise timing. It would probably require stubbing `findByPk`. <!-- If you don't want to use the SSCCE repository, you can also post a MINIMAL, SELF-CONTAINED code that reproduces the issue. It must be runnable by simply copying and pasting into an isolated JS file, except possibly for the database connection configuration. Check http://sscce.org/ or https://stackoverflow.com/help/minimal-reproducible-example to learn more about SSCCE/MCVE/reprex. --> ```js const [record] = Model.upsert({id: 1}, {returning: true}); // record may be null if the `destroy` call below ran in between the underlying update/insert and the subsequent findByPk // concurrently Model.destroy({where: {id: 1}}); ``` ### What do you expect to happen? `upsert` returns record that was inserted ### What is actually happening? `upsert` returns null ### Environment - Sequelize version: master - Node.js version: n/a - Operating System: n/a ## Issue Template Checklist <!-- Please answer the questions below. If you don't, your issue may be closed. --> ### How does this problem relate to dialects? <!-- Choose one. --> - [x] I think this problem happens regardless of the dialect. - [ ] I think this problem happens only for the following dialect(s): <!-- Put dialect(s) here --> - [ ] I don't know, I was using PUT-YOUR-DIALECT-HERE, with connector library version XXX and database version XXX ### Would you be willing to resolve this issue by submitting a Pull Request? <!-- Remember that first contributors are welcome! --> - [ ] Yes, I have the time and I know how to start. - [ ] Yes, I have the time but I don't know how to start, I would need guidance. - [x] No, I don't have the time, although I believe I could do it if I had the time... - [ ] No, I don't have the time and I wouldn't even know how to start.
True
Upsert with `returning: true` requires extra read from DB - ## Issue Description `Model.upsert` accepts a `returning` option since https://github.com/sequelize/sequelize/pull/8924. However, the way this is implemented is via a platform-specific upsert that returns the record's primary key **and then** a subsequent SELECT call to fetch the record. Relevant code: https://github.com/sequelize/sequelize/blob/c66663ed70d80a5cf661f99a2139de7c927c5ffe/lib/model.js#L2495-L2501 This is problematic for a few reasons: 1. Extra round trip to database. This should be unnecessary on platforms that support returning the full record from UPDATE/INSERTs like Postgres. 2. Subsequent read may hit a read replica instead: https://github.com/sequelize/sequelize/issues/9216 3. Use of a non-repeatable read means `upsert` can return a different result than what was actually upserted. For example, on [Postgres's default READ COMMITTED transaction isolation level](https://www.postgresql.org/docs/current/transaction-iso.html), a transaction may "re-reads data it has previously read and finds that data has been modified by another transaction (that committed since the initial read)." The fact that the SELECT runs in the same transaction as the INSERT/UPDATE doesn't help. Thus, `upsert` isn't returning the record that was created or the record as it existed as of the update, it's returning the record that existed when the subsequent `findByPk` was executed. In an extreme case, the record could have even been deleted, which would mean `upsert` returns `null`. ### What are you doing? Sorry, no SSCCE since reproducing the non-repeatable read requires precise timing. It would probably require stubbing `findByPk`. <!-- If you don't want to use the SSCCE repository, you can also post a MINIMAL, SELF-CONTAINED code that reproduces the issue. It must be runnable by simply copying and pasting into an isolated JS file, except possibly for the database connection configuration. Check http://sscce.org/ or https://stackoverflow.com/help/minimal-reproducible-example to learn more about SSCCE/MCVE/reprex. --> ```js const [record] = Model.upsert({id: 1}, {returning: true}); // record may be null if the `destroy` call below ran in between the underlying update/insert and the subsequent findByPk // concurrently Model.destroy({where: {id: 1}}); ``` ### What do you expect to happen? `upsert` returns record that was inserted ### What is actually happening? `upsert` returns null ### Environment - Sequelize version: master - Node.js version: n/a - Operating System: n/a ## Issue Template Checklist <!-- Please answer the questions below. If you don't, your issue may be closed. --> ### How does this problem relate to dialects? <!-- Choose one. --> - [x] I think this problem happens regardless of the dialect. - [ ] I think this problem happens only for the following dialect(s): <!-- Put dialect(s) here --> - [ ] I don't know, I was using PUT-YOUR-DIALECT-HERE, with connector library version XXX and database version XXX ### Would you be willing to resolve this issue by submitting a Pull Request? <!-- Remember that first contributors are welcome! --> - [ ] Yes, I have the time and I know how to start. - [ ] Yes, I have the time but I don't know how to start, I would need guidance. - [x] No, I don't have the time, although I believe I could do it if I had the time... - [ ] No, I don't have the time and I wouldn't even know how to start.
perf
upsert with returning true requires extra read from db issue description model upsert accepts a returning option since however the way this is implemented is via a platform specific upsert that returns the record s primary key and then a subsequent select call to fetch the record relevant code this is problematic for a few reasons extra round trip to database this should be unnecessary on platforms that support returning the full record from update inserts like postgres subsequent read may hit a read replica instead use of a non repeatable read means upsert can return a different result than what was actually upserted for example on a transaction may re reads data it has previously read and finds that data has been modified by another transaction that committed since the initial read the fact that the select runs in the same transaction as the insert update doesn t help thus upsert isn t returning the record that was created or the record as it existed as of the update it s returning the record that existed when the subsequent findbypk was executed in an extreme case the record could have even been deleted which would mean upsert returns null what are you doing sorry no sscce since reproducing the non repeatable read requires precise timing it would probably require stubbing findbypk if you don t want to use the sscce repository you can also post a minimal self contained code that reproduces the issue it must be runnable by simply copying and pasting into an isolated js file except possibly for the database connection configuration check or to learn more about sscce mcve reprex js const model upsert id returning true record may be null if the destroy call below ran in between the underlying update insert and the subsequent findbypk concurrently model destroy where id what do you expect to happen upsert returns record that was inserted what is actually happening upsert returns null environment sequelize version master node js version n a operating system n a issue template checklist how does this problem relate to dialects i think this problem happens regardless of the dialect i think this problem happens only for the following dialect s i don t know i was using put your dialect here with connector library version xxx and database version xxx would you be willing to resolve this issue by submitting a pull request yes i have the time and i know how to start yes i have the time but i don t know how to start i would need guidance no i don t have the time although i believe i could do it if i had the time no i don t have the time and i wouldn t even know how to start
1