| question_id (int64) | answer_id (int64) | title (string) | question (string) | answer (string) |
|---|---|---|---|---|
73,536,136
| 73,536,232
|
struct with variable length array in std::variant
|
So I'm working with this struct type with a variable length array member, like this:
struct Entry;
struct Data {
int members;
size_t entries_size;
Entry entries[1];
};
The entries_size member is the actual size of the entries array; it is only known when the data is read from the network. The problem is that I'd like to use this type in a std::variant among other network data types. How can I put it into a std::variant with the footprint of the whole data object, without dangling pointers flying around?
std::variant<Data, OtherData2, OhterData3...> v;
|
As has been pointed out, this is not legal C++. It just won't work this way. However, if the number of entries is bounded, it might be possible to do this:
template <std::size_t N>
struct Data {
int members;
static constexpr std::size_t entries_size = N;
Entry entries[N];
}; // Of course, you might want to use std::array<Entry,N> instead!
// ...
std::variant<Data<2>, Data<8>, Data<32>> example;
This, of course, is highly situational. If you know the incoming data will always have 2, 8 or 32 entries, you can specify these at compile time. If the number of entries is completely variable, with no guarantees, use std::vector. That's what it's for.
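A minimal sketch of the std::vector route (VarData and OtherData2 are illustrative names, not the original types): the variant then owns the whole payload, entries included, so nothing can dangle.
#include <variant>
#include <vector>

struct Entry { /* fields as needed */ };

struct VarData {
    int members;
    std::vector<Entry> entries;        // entries.size() replaces entries_size
};

struct OtherData2 { };                 // stand-in for the other network types

std::variant<VarData, OtherData2> v;

int main() {
    v = VarData{2, std::vector<Entry>(16)};   // e.g. 16 entries read from the network
}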
|
73,536,861
| 73,567,816
|
How to handle multiple inheritance of same methods or diamond problem with CRTP static polymorphism?
|
I want to implement polymorphism statically with CRTP. I want to create several base classes that provide functionality. The functionalities can overlap, but when they do, they are identical.
Suppose I have
template<class derived> class Boxer {
public:
void walk(int nsteps) {
for (auto _ = nsteps; _--;) static_cast<derived&>(*this).step();
}
void punch() { static_cast<derived&>(*this).moveArm(); }
protected:
~Boxer() = default;
};
template<class derived> class ChessPlayer {
public:
void walk(int nsteps) {
for (auto _ = nsteps; _--;) static_cast<derived&>(*this).step();
}
void playChess() { static_cast<derived&>(*this).think(); }
protected:
~ChessPlayer() = default;
};
class ChessBoxer : public Boxer<ChessBoxer>, public ChessPlayer<ChessBoxer> {
public:
void step() { std::cout << "one step at a time \n"; }
void moveArm() { std::cout << "moving my arm\n"; }
void think() { std::cout << "thinking\n"; }
};
int main(int argc, const char * argv[]) {
ChessBoxer vec;
vec.walk(3);
vec.punch();
vec.playChess();
return 0;
}
Both Boxer and ChessPlayer provide walk. Both definitions of walk are identical.
By the way, I could (and probably should) rewrite the code above to avoid the duplication of walk's code:
template<class derived, class top> class Walker {
public:
void walk(int nsteps) {
for (auto _ = nsteps; _--;) static_cast<top&>(*this).step();
}
protected:
~Walker() = default;
};
template<class derived> class Boxer : public Walker<Boxer<derived>, derived> {
public:
void punch() { static_cast<derived&>(*this).moveArm(); }
protected:
~Boxer() = default;
};
template<class derived> class ChessPlayer : public Walker<ChessPlayer<derived>, derived> {
public:
void playChess() { static_cast<derived&>(*this).think(); }
protected:
~ChessPlayer() = default;
};
class ChessBoxer : public Boxer<ChessBoxer>, public ChessPlayer<ChessBoxer> {
public:
void step() { std::cout << "one step at \n"; }
void moveArm() { std::cout << "moving my arm\n"; }
void think() { std::cout << "thinking\n"; }
};
int main(int argc, const char * argv[]) {
ChessBoxer vec;
vec.walk(3);
vec.punch();
vec.playChess();
return 0;
}
But still that creates the diamond problem.
How can I solve this problem, keeping static polymorphism? Also I want the final derived class to not have to bother with technicalities.
|
I found the answer to my question. I do not take credit for the solution.
One possible solution can be found at the following webpage
https://www.fluentcpp.com/2018/08/28/removing-duplicates-crtp-base-classes/
of Jonathan Boccara's blog.
Another solution is provided in Matthew Borkowski's comment within the same page, and it links to the code http://coliru.stacked-crooked.com/a/463db3673b139429
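For a rough idea of the shape such solutions take, here is a generic sketch of my own (not necessarily what either linked page does): give the shared functionality exactly one CRTP base and let the final class inherit it directly, so walk exists only once and there is nothing to disambiguate. The trade-off is that ChessBoxer has to name Walker itself:
#include <iostream>

template<class Derived> class Walker {            // sole provider of walk
public:
    void walk(int nsteps) {
        for (auto n = nsteps; n--;) static_cast<Derived&>(*this).step();
    }
protected:
    ~Walker() = default;
};

template<class Derived> class Boxer {
public:
    void punch() { static_cast<Derived&>(*this).moveArm(); }
protected:
    ~Boxer() = default;
};

template<class Derived> class ChessPlayer {
public:
    void playChess() { static_cast<Derived&>(*this).think(); }
protected:
    ~ChessPlayer() = default;
};

class ChessBoxer : public Walker<ChessBoxer>, public Boxer<ChessBoxer>, public ChessPlayer<ChessBoxer> {
public:
    void step()    { std::cout << "one step at a time\n"; }
    void moveArm() { std::cout << "moving my arm\n"; }
    void think()   { std::cout << "thinking\n"; }
};

int main() {
    ChessBoxer cb;
    cb.walk(3);
    cb.punch();
    cb.playChess();
}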
|
73,537,185
| 73,537,222
|
Left View Of a Binary Tree
|
To find the set of all nodes that are visible from the left side of a binary tree.
vector<int> getLeftView(TreeNode<int> *root)
{
static vector<int> res;
// Your code here
if(root){
res.push_back(root->data);
if(root->left)
getLeftView(root->left);
else
getLeftView(root->right);
}
return res;
}
For a single test case at a time it works fine. But when multiple test cases are run, the values from the previous test case stay in the vector and the new values are appended to them. How do I clear the vector before running the next test case?
|
You used static because you need a single instance of the vector to be used across the recursion. But static is not the way to get that; it creates just one instance of the vector for the entire program.
There are various solutions, one of which is to split the function into the API and recursive part:
void getLeftViewRec(TreeNode<int> *root, vector<int> &res)
{
if(root){
res.push_back(root->data);
if(root->left)
getLeftViewRec(root->left, res);
else
getLeftViewRec(root->right, res);
}
}
vector<int> getLeftView(TreeNode<int> *root)
{
vector<int> res;
getLeftViewRec(root, res);
return res;
}
Now what happens is that every time getLeftView is called, a new vector res is instantiated as a local variable. It then calls the recursive function getLeftViewRec which receives res by reference, and passes it to itself through the recursive calls, so the recursion is working with a single vector, accumulating into it.
|
73,537,755
| 73,543,913
|
What's the point of `viewable_range` concept?
|
[range.refinements]
The viewable_range concept specifies the requirements of a range type that can be converted to a view safely.
Its mandated implementation roughly states that a range further satisfies viewable_range if either
it's simply a view, e.g. std::string_view, or
it's an lvalue reference (even when its reference-removed type is not a view), e.g. std::vector<int>&, or
it's a movable object type (i.e. not reference type), e.g. std::vector<int>
My questions are:
What idea does this concept capture? Specifically, how could its instance "be converted to a view safely" and why do I want such conversion? What does "safety" even mean here?
Do you always use viewable_range to constrain a forwarding (universal) reference? (i.e. the only case where T can be deduced to an lvalue reference type.) This is the case for standard range adaptor (closure) objects.
Range adaptor (closure) objects are modified to take a range as first parameter in C++23. Is there other usage for viewable_range concept since then?
For application development, when to use viewable_range instead of view or range?
|
What idea does this concept capture? Specifically, how could its instance "be converted to a view safely" and why do I want such conversion? What does "safety" even mean here?
Any range can be converted to a view, simply by doing this:
template <input_range R>
auto into_view(R&& r) {
return ref_view(r);
}
But this isn't really a great idea. If we had an rvalue range (whether it's a view or not), now we're taking a reference to it, so this could dangle if we hold onto the resulting view too long.
In general, we don't want to hold onto references to views - the point of a view is that range adaptors hold them by value, not by reference. But also in general, we don't want to hold non-view ranges by value, since those are expensive to copy.
What viewable_range does is restrict the set of ranges to those where we can convert to a view without added concern for dangling:
views should always be by value - so an lvalue view is only a viewable_range if it is copyable. An rvalue view is always a viewable_range (because views have to be movable).
an lvalue non-view range is always a viewable_range because we take a ref_view in that case. This of course has potential for dangling, but we're taking an lvalue, so it's the safer case.
an rvalue non-view range was originally rejected (because our only option was ref_view and we don't want to refer in this case), but will start to be captured as owning_view (as of P2415).
So basically, the only thing that isn't a viewable range is an lvalue non-copyable view, because of the desire to avoid taking references to views.
Do you always use viewable_range to constrain a forwarding (universal) reference? (i.e. the only case where T can be deduced to an lvalue reference type.) This is the case for standard range adaptor (closure) objects.
No. Only if what you want to do with the forwarding reference is convert it to a view and store the resulting view. Range adaptors do this, but algorithms don't need to - so they shouldn't use that constraint (none of the standard library algorithms do).
Range adaptor (closure) objects are modified to take a range as first parameter in C++23. Is there other usage for viewable_range concept since then?
The term range adaptor closure object is relaxed because now that we can have user-defined range adaptor closure objects (P2387), we can't really enforce what it is that they actually do.
But the standard library range adaptor closure objects still do require viewable_range (by way of all_t).
For application development, when to use viewable_range instead of view or range?
This is really the same question as (2). If what you want is to take any range and convert it to a view to be stored, you use viewable_range - so when you're writing a range adaptor.
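A small sketch of that last point: first_two below is a made-up adaptor-like function (not from any library); it constrains on viewable_range precisely because it converts its argument to a view via views::all and hands that view back to be stored or composed later.
#include <ranges>
#include <utility>
#include <vector>

template <std::ranges::viewable_range R>
auto first_two(R&& r) {
    // views::all (what all_t wraps) turns the argument into a view we can safely return
    return std::views::all(std::forward<R>(r)) | std::views::take(2);
}

int main() {
    std::vector<int> v{1, 2, 3, 4};
    auto a = first_two(v);                        // lvalue non-view: wrapped in ref_view
    auto b = first_two(std::vector<int>{5, 6});   // rvalue non-view: owning_view, given P2415
    (void)a; (void)b;
}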
|
73,537,920
| 73,547,405
|
LLVM opt unable to print instructions from a specific function, But it does for the rest of the functions in the IR
|
I am new to the LLVM framework and was able to run a basic pass that iterates over the instructions of a simple IR function with only an entry basic block. To expand upon that, I got an .ll file from clang for a simple C function (don't mind the correctness of the function; I don't care about it for the sake of learning LLVM, at least for now).
// fact.c
int fact(int n){
int t =1;
for(int i = 2;i<=n;i++){
t = t*i;
}
return t;
}
I was able to get a fact.ll file for this function, given below, and there are 3 more functions in fact.ll which I hand-coded into the IR: foo, add and bar. I attempt to run a simple pass which iterates over each BasicBlock, gathers its instruction opcodes and simply prints them at the end. My issue is that the opt tool is able to do this for the foo, add and bar functions but not for the fact function.
Pass file :
#include "llvm/Transforms/Utils/MyHello.h"
#include <string>
using namespace llvm;
PreservedAnalyses MyHelloPass::run(Function &F,FunctionAnalysisManager &AM) {
std::string output;
errs()<<F.getName()<<"\n";
for(Function::iterator BB = F.begin();BB!=F.end();BB++){
for(BasicBlock::iterator I = BB->begin();I!=BB->end();I++){
output+=(I->getOpcodeName());
output+='\n';
}
}
errs()<<output<<'\n';
return PreservedAnalyses::all();
}
fact.ll
; ModuleID = 'fact.c'
source_filename = "fact.c"
target datalayout = "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128"
target triple = "x86_64-unknown-linux-gnu"
; Function Attrs: noinline nounwind optnone uwtable
define dso_local i32 @fact(i32 noundef %n) #0 {
entry:
%n.addr = alloca i32, align 4
%t = alloca i32, align 4
%i = alloca i32, align 4
store i32 %n, i32* %n.addr, align 4
store i32 1, i32* %t, align 4
store i32 2, i32* %i, align 4
br label %for.cond
for.cond: ; preds = %for.inc, %entry
%0 = load i32, i32* %i, align 4
%1 = load i32, i32* %n.addr, align 4
%cmp = icmp sle i32 %0, %1
br i1 %cmp, label %for.body, label %for.end
for.body: ; preds = %for.cond
%2 = load i32, i32* %t, align 4
%3 = load i32, i32* %i, align 4
%mul = mul nsw i32 %2, %3
store i32 %mul, i32* %t, align 4
br label %for.inc
for.inc: ; preds = %for.body
%4 = load i32, i32* %i, align 4
%inc = add nsw i32 %4, 1
store i32 %inc, i32* %i, align 4
br label %for.cond, !llvm.loop !6
for.end: ; preds = %for.cond
%5 = load i32, i32* %t, align 4
ret i32 %5
}
define i32 @foo(){
%a = add i32 2,3
ret i32 %a
}
define i32 @add(i32 %a,i32 %b){
%c = add i32 %a,%b
%d = add i32 %c,%c
%e = sub i32 %c, %d
%f = mul i32 %d, %e
ret i32 %f
}
define void @bar(){
ret void
}
attributes #0 = { noinline nounwind optnone uwtable "frame-pointer"="all" "min-legal-vector-width"="0" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "tune-cpu"="generic" }
!llvm.module.flags = !{!0, !1, !2, !3, !4}
!llvm.ident = !{!5}
!0 = !{i32 1, !"wchar_size", i32 4}
!1 = !{i32 7, !"PIC Level", i32 2}
!2 = !{i32 7, !"PIE Level", i32 2}
!3 = !{i32 7, !"uwtable", i32 2}
!4 = !{i32 7, !"frame-pointer", i32 2}
!5 = !{!"AMD\C2\A0\C2\A0-DCLANG_REPOSITORY_STRING=CLANG: clang version 15.0.0 (CLANG: Jenkins CPUPC_Mirror_To_Staging_Merge-Build#892) (based on LLVM Mirror.Version.14.0.0)"}
!6 = distinct !{!6, !7}
!7 = !{!"llvm.loop.mustprogress"}
run command : opt -disable-output fact.ll -passes="myhello"
Output:
foo
add
ret
add
add
add
sub
mul
ret
bar
ret
|
See the optnone in:
; Function Attrs: noinline nounwind optnone uwtable
define dso_local i32 @fact(i32 noundef %n) #0 {
That means that this function is opting out of optimizations, hence your pass will not be run on that function.
You can manually remove the optnone from the definition of #0 at the bottom (note: the ; Function Attrs: ... line is merely a comment, changing it has no effect) or you can build your LLVM IR with "clang -O2". You may want to also add -mllvm -disable-llvm-optzns if you want clang to produce IR that could be optimized but hasn't been run through LLVM passes.
|
73,538,141
| 73,538,641
|
How to assign a specific number to all elements of matrix in vector<vector<int>> matrix_name without using for loop stuff?
|
Like how we do this for a one-dimensional array/vector to assign a specific number to all elements:
vector<int> arr(n,-1);
So ya, above I mention how to do it for a one-dimensional vector, but what about a vector<vector<int>> matrix_name?
|
Just as you can initialize a vector with a number of integers, given a default integer value, you can also initialize a vector with a number of other vectors, given a default value for that other vector. That way you can easily create a multidimensional vector.
Example:
int nr_of_rows = 30;
int nr_of_columns = 40;
std::vector matrix(nr_of_rows , std::vector(nr_of_columns , -1));
This creates a vector of 30 vectors of 40 integers with a default value of -1.
matrix[2][17] gives you the element with index 17 inside the vector with index 2 of your matrix.
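If you prefer to spell the types out (or are on a pre-C++17 compiler, where the class template argument deduction used above is not available), the same line reads:
std::vector<std::vector<int>> matrix(nr_of_rows, std::vector<int>(nr_of_columns, -1));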
|
73,538,190
| 73,538,359
|
C++ Sized template deduction
|
I want to create a wrapper around std::array with some extended functionality. The class should be instantiable via an std::initializer_list. I also need a constructor to which I can pass an existing instance of the class plus a "suffix", meaning that the new instance needs to be of size + 1.
#include <stddef.h>
#include <array>
#include <initializer_list>
template<size_t L>
class JsonPointer {
public:
constexpr JsonPointer(std::array<const char*, L> segments)
: m_path_segments(std::move(segments))
{
}
constexpr JsonPointer(std::initializer_list<const char*> list) : m_path_segments(std::array<const char*, list.size()>)
{
}
template<size_t S>
JsonPointer(JsonPointer<S> existing, const char* suffix) {
}
private:
std::array<const char*, L> m_path_segments;
};
int main(){
constexpr JsonPointer base_ptr {"a", "n"};
auto ptr = JsonPointer(base_ptr, "test");
}
How can I use the size of std::initializer_list for constexpr initialization? In the "expansion", how can I make the deduction realize that S=L+1?
|
With C++20 you can achieve this by adding 2 user-defined deduction guides:
template<typename... Ts>
JsonPointer(Ts... ts)->JsonPointer<sizeof...(Ts)>;
template<size_t S>
JsonPointer(JsonPointer<S> existing, const char* suffix)->JsonPointer<S + 1>;
They allow the compiler to find the correct deduction without naming the full template.
template<size_t L>
class JsonPointer {
public:
constexpr JsonPointer(std::array<const char*, L> segments)
: m_path_segments(std::move(segments))
{
}
constexpr JsonPointer(std::initializer_list<const char*> list) : m_path_segments() {
std::copy(list.begin(), list.end(), this->m_path_segments.begin());
}
template<size_t S>
constexpr JsonPointer(JsonPointer<S> existing, const char* suffix) : m_path_segments() {
std::copy(existing.m_path_segments.begin(), existing.m_path_segments.end(),
this->m_path_segments.begin());
this->m_path_segments.back() = std::move(suffix);
}
template<size_t S>
friend class JsonPointer;
private:
std::array<const char*, L> m_path_segments;
};
template<typename... Ts>
JsonPointer(Ts... ts)->JsonPointer<sizeof...(Ts)>;
template<size_t S>
JsonPointer(JsonPointer<S> existing, const char* suffix)->JsonPointer<S + 1>;
int main() {
constexpr JsonPointer base_ptr{ "a", "n" };
constexpr auto ptr = JsonPointer(base_ptr, "test");
}
The constexpr ptr now holds {"a", "n", "test"}.
|
73,538,780
| 73,538,982
|
Program cannot be executed because Cygcurl-4.dll was not found
|
What I did:
I downloaded the code and applications from a C++ course for projects that I was doing online.
I am using Windows 10; everything like MinGW, Cygwin etc. is installed and the path is also set up in the system.
Cygwin is installed and set up with the default packages.
Issue: When I try to run the .exe application file, it gives the error that Cygcurl-4.dll was not found, as in the screenshot below:
screenshot 1
In the Cygwin setup, I really did not find any package with the name 'Cygcurl-4.dll':
screenshot 2
I have also tried searching for this on Google but could not find how to fix it on Windows 10.
|
The lib is here
https://cygwin.com/packages/summary/libcurl4.html
The libcurl4 shared object is usr/bin/cygcurl-4.dll on Windows (you can see it in the package file list).
Be careful to install the x86_64 version if your system is 64-bit.
|
73,538,806
| 73,539,679
|
Visual Studio exhibits inconsistent behavior in Create a new C++ project
|
I'm using Visual Studio Community 2022 (64-bit), Version 17.1.6 according to VS Help->About but 17.3.2 according to VS Installer, on Windows 10. I've been programming in C# for some time and decided to try C++.
I downloaded the workload Desktop development with C++ through VS Installer->Modify. But I cannot see the C++ project templates in the Create a new project form.
If I click Next in the above form, a C# project is created. Using VS Installer I
ran Repair VS
uninstalled Desktop development with C++
installed Desktop development with C++ again.
When I start VS from VS Installer->Launch, I get a long list of C++ templates.
But when starting VS directly, without Installer, no templates are found.
|
I have found the reason for this behavior. I start VS using an icon (shortcut) pinned to start in Windows 10. This icon was pointing at an old version of VS. When VS was updated and a new workload was added, this icon still pointed to the old VS version.
I removed this icon and created a new one, and everything works fine now.
A VS shortcut points at ..\Microsoft Visual Studio\2022\Common7\IDE\devenv.exe. In my case, there were two copies of this file tree: one under C:\Program Files\ and the other under D:. The one on D: was the correct one, but at some update VS Installer ignored this configuration and selected the C: drive by default.
|
73,539,348
| 73,539,390
|
How to make vectors unmodifiable when passing?
|
My wording might not be correct, but I hope you'll understand what I mean.
Basically I have a map<enum class, vector<struct*>>. I want to pass the whole map without the vectors' contents being modifiable.
I need the structs as pointers, because I save references to some of them, e.g. to the player instance, and they'd get invalidated otherwise.
It seems I can neither pass the map nor the vectors by reference. So this...
std::map<myEnum, std::vector<const myStruct&>> getMap() {
return myMap;
}
...as well as...
const std::map<myEnum, std::vector<myStruct*>>& getMap() {
return myMap;
}
doesn't work.
Is there any way to solve this?
Sorry if the question is dumb, I'm kinda new to C++ and often don't know what to search for.
Thanks for your help!
|
First of all, you should declare the member function itself const; otherwise it can't be called when the object itself is const.
Second, you don't have to think about the exact typing; you can let the compiler figure it out by saying const auto&. Then it will return a constant reference, so it won't be modifiable:
const auto& GameLogic::getMap() const {
return myMap;
}
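Spelled out without auto, and assuming myMap is a member of a GameLogic class as in the snippet above, the accessor would look like this; the trailing const is the part the second attempt in the question was missing:
const std::map<myEnum, std::vector<myStruct*>>& GameLogic::getMap() const {
    return myMap;
}
One caveat: this keeps callers from modifying the map and the vectors themselves, but the myStruct objects behind the raw pointers can still be modified through those pointers; if that also needs to be blocked, the map would have to store pointers to const (std::vector<const myStruct*>).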
|
73,539,727
| 73,540,306
|
Valgrind relative paths in suppression file
|
Valgrind can generate a suppression file, but by default it contains absolute paths to libraries. I want to share this suppression file between multiple computers, and my project may be stored at different paths.
How can I specify relative paths to libraries in .supp file?
|
The documentation says:
Locations may be names of either shared objects, functions, or source lines. They begin with obj:, fun:, or src: respectively. Function, object, and file names to match against may use the wildcard characters * and ?. Source lines are specified using the form filename[:lineNumber].
Therefore, you can use wildcards and specify relative paths.
For an inspiration see the default suppression file shipped with valgrind (on my computer it is at /usr/lib/valgrind/default.supp), or various *.supp files on the git mirror.
|
73,540,342
| 74,411,216
|
Why does std::initializer_list lack cbegin/cend/empty etc. methods?
|
I wonder why std::initializer_list has begin/end methods, but lacks cbegin/cend/empty, etc. methods.
Minimizing std::initializer_list is one possibility; however, what is the point of minimizing it? I think such methods do not belong only to the containers; they are also useful for std::initializer_list.
Why?
|
Probably to keep std::initializer_list minimal; otherwise it would largely duplicate std::array.
|
73,541,305
| 73,560,332
|
Can't operate on vectors in C++ (Vscode)
|
I'm using the g++ compiler and used the https://code.visualstudio.com/docs/languages/cpp guide to install everything. Never had any sort of problem.
I can initialise a vector from the standard library, but as soon as I attempt to initialise it with values, or add to it, or print its size after adding to it, I get a blank line in the console. There is no compilation error, and I've tried compiling with the -std flag as 'c++11' and 'c++17'. The odd thing is that even if I put a cout statement before I add to the vector, it won't output anything - it's like it just halts the whole program.
I am using vscode, and I've read of vaguely similar issues but none of the problems are identical and none of the solutions have worked. My code is below:
Imports:
#include <iostream>
#include <limits>
#include <vector>
#include <string>
Main function:
int main()
{
std::cout << "BEFORE";
std::vector<int> data;
std::cout << data.size(); // Sometimes outputs 0 if the vector is not modified, but prints nothing if it is (even after this statement)
data.push_back(20);
std::cout << data.size();
std::cout << "AFTER";
}
Output:
UPDATE: Flushing the buffer and updating mingw haven't changed anything.
|
Thanks to @n. 1.8e9-where's-my-share m. I was able to find an answer. The problem had to do with VS Code not being able to display the output. I ran the compiled .exe using the mingw64 console (outside of VS Code, not the internal terminal) and it produced the correct output.
|
73,542,624
| 73,545,448
|
Define a lambda creation
|
How can I make a general lambda creator, e.g, something like this:
#define LAMBDA(f, args...)
to create lambda:
[&](){ return f(args...); };
So I can do:
int main
{
int a, b, c, d;
auto lambda4 = LAMBDA(foo4, a, b , c, d);
int e, f;
auto lambda2 = LAMBDA(foo2, e, f);
}
I'm restricted to using C++14.
|
Please do not use macros for this. It just obfuscates the code.
The best solution is std::bind_front:
auto l4 = std::bind_front(foo4, a, b, c, d);
But that's C++20.
With C++14 you can use std::bind. However you state you cannot use it.
We are then left with implementing a bind_full function ourselves. Getting one off the ground with copy semantics is pretty simple:
template <class F, class... Args>
constexpr auto bind_full(F f, Args... args)
noexcept (noexcept (f(args...)))
{
return [=]() {
return f(args...);
};
}
auto foo_bind = bind_full(foo4, a, b, c, d);
return foo_bind();
Optimizing it for forwarding semantics is deceptively hard though. You can't do it with a lambda; you need a custom functor with overloaded function call operators.
|
73,542,960
| 73,545,651
|
How To Properly Call a Method From a Mocked Method in Google Test/Mock
|
In Gmock, I'm trying to get a mocked method to sleep for a few milliseconds and then call a method in the Class under Test. Here is an example of what I'm trying to do:
EXPECT_CALL(mockedClass, mockedMethod())
.Times(1)
.WillOnce(DoAll(Wait(100), ClassUnderTest.MethodToCall()));
I've defined Wait() a little higher up as:
ACTION_P(Wait, ms) { Sleep(ms); }
The problem is that I can't seem to get away from this compiler error:
Error C2664: cannot convert argument 2 from 'const Action2' to 'const testing::Action<F>&'
I just started using Google Test/Mock recently and nothing I've tried or can find seems to do anything about the problem.
Can anyone help me understand how to properly call a method from a mocked method in Google Test/Mock?
|
Try using InvokeWithoutArgs, something like this:
EXPECT_CALL(mockedClass, mockedMethod())
.WillOnce(DoAll(Wait(100), InvokeWithoutArgs([&ClassUnderTest]() { ClassUnderTest.MethodToCall(); })));
Times(1) is not needed with WillOnce.
|
73,543,069
| 73,545,728
|
SCons: set a directory as NoClean
|
I'm using SCons as my build system for C++.
There's a sub directory that contains a static library.
I've tried to set:
NoClean("${PATH_TO_DIR}")
But the files in this directory are still removed by scons -c.
Is there a way to prevent this command from removing all files generated in this directory?
|
The flag -c works more or less like this:
List all the files that could be built by this call of scons if the flag was not there.
Add to the list relevant files marked to be cleaned with Clean().
Delete from the list the files marked with NoClean() (non-recursively).
Remove all the files that are remaining on the list from the filesystem.
SCons is very focused on files and it doesn't work that well with directories. Usually, it just creates them automatically whenever they are needed, and for the rest of the time it pretends they don't exist and that it is all a flat file system. It doesn't even delete automatically created directories, so your NoClean() is doubly ineffective :-) (You would need to call Clean() on the directory for SCons to be able to remove it during cleaning.)
I think your only option is to call NoClean() for every file that you create in this directory. (If you have the list/set of those files lying somewhere, you can just call NoClean() once, passing the list to it.)
|
73,543,875
| 73,546,736
|
Absolute hysteresis calculation in C++
|
I want to implement a template function which detects whether the difference between ValueA and ValueB is bigger than a given hysteresis.
e.g.
ValueA=5, ValueB=7, Hysteresis=1 -> true
ValueA=5, ValueB=7, Hysteresis=3 -> false
ValueA=-5, ValueB=1, Hysteresis=7 -> false
So I implemented this function:
template<typename T>
bool MyClass::IsHysteresisExceeded(T ValueA, T ValueB, T Hysteresis) {
T ValueMax = std::max(ValueA, ValueB);
T ValueMin = std::min(ValueA, ValueB);
return (ValueMax - ValueMin) > Hysteresis;
}
But with the following parameters this function returns false when I expected true as result.
IsHysteresisExceeded<int>(-2147483648, 2147483647, 10)
I know that an integer overflow occurs while subtracting, but I have not found an elegant solution yet.
|
I have the following solution for integers:
template<typename T>
bool IsHysteresisExceeded(T ValueA, T ValueB, T Hysteresis) {
T ValueMax = std::max(ValueA, ValueB);
T ValueMin = std::min(ValueA, ValueB);
assert(Hysteresis >= 0);
T underflowRange = std::numeric_limits<T>::min() + Hysteresis;
bool underflow = underflowRange > ValueMax;
return !underflow && (ValueMax - Hysteresis > ValueMin);
}
The trick is to detect the underflow. If it happens you may be sure ValueMin is in the range <std::numeric_limits<T>::min(), ValueMax> and
(ValueMax - Hysteresis) < std::numeric_limits<T>::min() <= ValueMin, so the hysteresis cannot be exceeded.
I posted the code on godbolt.org
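A quick usage sketch against the examples from the question, assuming the IsHysteresisExceeded template above is in scope:
#include <cassert>
#include <climits>

int main() {
    assert( IsHysteresisExceeded<int>(5, 7, 1));                // difference 2 > 1
    assert(!IsHysteresisExceeded<int>(5, 7, 3));                // difference 2 <= 3
    assert(!IsHysteresisExceeded<int>(-5, 1, 7));               // difference 6 <= 7
    assert( IsHysteresisExceeded<int>(INT_MIN, INT_MAX, 10));   // no overflow this time
}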
Edit:
My previous answer used a very popular approach and was also wrong. I proposed detecting the underflow like this:
T lowBound = ValueMax - Hysteresis;
bool underflow = lowBound > ValueMax;
Although it produces the expected results on the architectures I know, it is undefined behavior (the subtraction can overflow the signed type).
|
73,544,311
| 73,544,579
|
What mechanism ensures that std::shared_ptr control block is thread-safe?
|
From articles like std::shared_ptr thread safety, I know that the control block of a std::shared_ptr is guaranteed to be thread-safe by the standard whilst the actual data pointed to is not inherently thread-safe (i.e., it is up to me as the user to make it so).
What I haven't been able to find in my research is an answer to how this is guaranteed. What I mean is: what mechanism, specifically, is used to ensure that the control block is thread-safe (and thus that an object is only deleted once)?
I ask because I am using the newlib-nano C++ library for embedded systems along with FreeRTOS. These two are not inherently designed to work with each other. Since I never wrote any code to ensure that the control block is thread-safe (e.g., no code for a critical section or mutex), I can only assume that it may not actually be thread-safe under FreeRTOS.
|
There isn't really much machinery required for this. For a rough sketch (not including all the requirements/features of the standard std::shared_ptr):
You only need to make sure that the reference counter is atomic, that it is incremented/decremented atomically and accessed with acquire/release semantics (actually some of the accesses can even be relaxed).
Then when the last instance of a shared pointer for a given control block is destroyed and it decremented the reference count to zero (this needs to be checked atomically with the decrement using e.g. std::atomic::fetch_add's return value), the destructor knows that there is no other thread holding a reference to the control block anymore and it can simply destroy the managed object and clean up the control block.
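A very rough sketch of that kind of machinery, with made-up names and none of the weak-count or deleter handling a real std::shared_ptr control block has:
#include <atomic>

struct control_block {
    std::atomic<long> use_count{1};

    void add_ref() {
        // incrementing can be relaxed: the caller already owns a reference
        use_count.fetch_add(1, std::memory_order_relaxed);
    }

    void release() {
        // fetch_sub returns the previous value, so exactly one thread observes 1 here
        if (use_count.fetch_sub(1, std::memory_order_acq_rel) == 1) {
            destroy_managed_object();   // only the last owner reaches this point
            delete this;                // then the control block frees itself
        }
    }

    void destroy_managed_object() { /* delete the managed object here */ }
};

int main() {
    auto* cb = new control_block();   // as if created by the first shared_ptr
    cb->add_ref();                    // as if that shared_ptr were copied
    cb->release();
    cb->release();                    // the last release tears everything down
}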
|
73,544,740
| 73,544,885
|
How to pass std::thread as thread id in PostThreadMessage?
|
How could I pass the id of a std::thread thread as the id into PostThreadMessage?
Like suppose, I have a thread:
// Worker Thread
auto thread_func = [](){
while(true) {
MSG msg;
if(GetMessage(&msg, NULL, 0, 0)) {
// Do appropriate stuff depending on the message
}
}
};
std::thread thread_not_main = std::thread(thread_func);
And then I want to send a message from my main thread to the above thread so that I can handle the message in a non-expensive way. So as to not interrupt the main thread.
Like:
// Main Thread
while(true) {
MSG msg;
while(GetMessage(&msg, NULL, 0, 0)) {
TranslateMessage(&msg);
if(msg.message == WM_PAINT) {
PostThreadMessage(); // How do I pass the thread id into the function?
} else {
DispatchMessage(&msg);
}
}
}
The summary of the problem is that
PostThreadMessage requires a thread id to be passed in as a parameter, but
std::thread::get_id doesn't provide it in a "DWORD-convertible format", so I can't pass the thread's id as a parameter.
My question is: How would I pass the thread id as a parameter to PostThreadMessage?
|
You can get the underlying, "Windows-style" thread handle for a std::thread object by calling its native_handle() member function. From that, you can retrieve the thread's ID by calling the GetThreadId WinAPI function, and passing that native handle as its argument.
Here's a short code snippet that may be what you'd want in the case you've outlined:
auto tHandle = thread_not_main.native_handle(); // Gets a "HANDLE"
auto tID = GetThreadId(tHandle); // Gets a DWORD ID
if (msg.message == WM_PAINT) {
PostThreadMessage(tID, msg.message, msg.wParam, msg.lParam);
}
else {
DispatchMessage(&msg);
}
//...
|
73,544,840
| 73,545,430
|
In C++, if a function takes in a const std::string& as input, can I always call with std::move() on the string input?
|
I don't know much about the function, except that it takes a const std::string&. I want to call this function from inside a class, and the string input I'm sending in is returned from an instance function of that class.
Is std::move() usage here always safe and more performant, given what we know?
//In some header file:
void some_func_I_only_know_its_signature(const std::string& string_input);
class MyClass {
public:
void myFunc(){
some_func_I_only_know_its_signature(std::move(getMyString()));
}
private:
std::string getMyString(){
return myString;
}
std::string myString_;
};
|
std::move actually does absolutely nothing here:
some_func_I_only_know_its_signature(std::move(getMyString()));
The return value of getMyString is already an rvalue. The thing is, std::move actually doesn't move anything. This is a common misconception. All it does is cast an lvalue to an rvalue. If the value is an rvalue and has a move constructor (std::string does) it will get moved. But in this case, since the function does not expect an rvalue reference, either way you are just going to pass a reference to the return value of getMyString(). So you can just:
some_func_I_only_know_its_signature(getMyString());
That being said the most performant ways is to just:
some_func_I_only_know_its_signature(myString_);
The function expects a const& which means it only wants read access. Your getMyString() function creates a copy of myString_ and returns it. You don't have to create a copy here, you can just pass a reference to your string directly. get/set functions are usually used for controlled public access to a private field from outside the class.
|
73,545,297
| 73,546,158
|
Is there any way to construct an std::initializer_list from an unknown number of arguments?
|
Hi there, so I'm having some trouble battling a silly API. We have a function that is something along the lines of
void foo(std::initializer_list<T> access_list);
And what I want to do is take a run-time JSON array and use it to call this function. for example suppose the JSON was
data : [
{
x : 10,
y : 20
},
{
x : 30,
y : 40
},
...
]
Then I'd want to call foo with foo({10,20,30,40}).
The problem is, the JSON array can be any length, so I'd have to construct the list in a loop (i.e. into a vector) and then call foo on the constructed list. This is not possible as std::initializer_list does not have any functions to modify it after its initialisation, and there is no way of converting from a container (such as vector/array etc.) to an std::initializer_list.
I understand this is a misuse of std::initializer_list but is there any way (macros welcome) to create such a list?
I think one approach might be to convert the std::vector into a parameter pack and then use a macro on the parameter pack to form the std::initializer_list, but I'm not exactly sure how that would look. Thanks :).
|
If you are stuck with void foo(std::initializer_list<T> access_list); and you've populated a vector<T> with the data you'd like to supply to foo<T>, then you could build a runtime translator.
Caveats:
The max number of elements you aim to support must be known at compile time.
It's terribly slow to compile.
It instantiates a function template 1½ times the number of elements in the initializer_list<T> you aim to support.
It uses a binary search to find the correct overload in run-time. This is pretty quick though.
If you aim to support relatively few elements (I use 512 in my example) you may be able to live with this.
namespace detail {
template<class Func, class T, size_t... Is>
decltype(auto) helper(Func&& f, const std::vector<T>& vec,
std::index_sequence<Is...>)
{
if constexpr(sizeof...(Is) > 512) { // will throw a runtime exception
throw std::runtime_error("more than 512 elements not supported");
} else {
if(sizeof...(Is) + 255 < vec.size())
return helper(std::forward<Func>(f), vec,
std::make_index_sequence<sizeof...(Is) + 256>{});
if(sizeof...(Is) + 127 < vec.size())
return helper(std::forward<Func>(f), vec,
std::make_index_sequence<sizeof...(Is) + 128>{});
if(sizeof...(Is) + 63 < vec.size())
return helper(std::forward<Func>(f), vec,
std::make_index_sequence<sizeof...(Is) + 64>{});
if(sizeof...(Is) + 31 < vec.size())
return helper(std::forward<Func>(f), vec,
std::make_index_sequence<sizeof...(Is) + 32>{});
if(sizeof...(Is) + 15 < vec.size())
return helper(std::forward<Func>(f), vec,
std::make_index_sequence<sizeof...(Is) + 16>{});
if(sizeof...(Is) + 7 < vec.size())
return helper(std::forward<Func>(f), vec,
std::make_index_sequence<sizeof...(Is) + 8>{});
if(sizeof...(Is) + 3 < vec.size())
return helper(std::forward<Func>(f), vec,
std::make_index_sequence<sizeof...(Is) + 4>{});
if(sizeof...(Is) + 1 < vec.size())
return helper(std::forward<Func>(f), vec,
std::make_index_sequence<sizeof...(Is) + 2>{});
if(sizeof...(Is) < vec.size())
return helper(std::forward<Func>(f), vec,
std::make_index_sequence<sizeof...(Is) + 1>{});
// time to do the actual call:
return f({vec[Is]...});
}
}
} // namespace detail
template<class Func, class T>
decltype(auto) call(Func&& f, const std::vector<T>& vec) {
return detail::helper(std::forward<Func>(f), vec,
std::make_index_sequence<0>{});
}
You'd then call foo<T> like so:
std::vector<T> vec = ...;
call(foo<T>, vec);
Demo - likely to time out during compilation - but works if you don't have such a limit.
|
73,545,353
| 73,546,965
|
openssl Could Not Find libcrypto-3-x64.dll
|
I connected OpenSSL to my project and it compiles and runs well, but after compiling there is a file libcrypto-3-x64.dll next to the program, without which the program will not run. So the question is: how do I use OpenSSL without this dll, i.e. how do I integrate it into the project?
I found out what the problem is: with the dynamic library the issue does not exist, but when I compile the program against the static OpenSSL lib with the line EVP_CIPHER_CTX_t ctx(EVP_CIPHER_CTX_new()); there are many "unresolved external symbol" errors (for example __imp_getsockname).
VERSION 3.0.5
I compiled the library according to this guide https://youtu.be/PMHEoBkxYaQ (x64 static) and still get errors like "unresolved external symbol __imp_WSAGetLastError" and "unresolved external symbol __imp_CertOpenStore".
|
I solved my problem and am leaving the answer here: just add these 2 libraries under Linker => Additional Dependencies:
Ws2_32.lib
Crypt32.lib
|
73,546,146
| 73,546,739
|
how to call a function through a variable number of parameters
|
How to make a function call with a variable number of parameters?
to make it look something like this:
if (f(args...))
Example to reproduce:
template <class callable, class... arguments>
void timer(callable&& f, arguments&&... args )
{
f(args...);
}
class Client
{
public:
void receive(int, int)
{
}
void sub(int x, int y)
{
timer(&Client::receive, this, x, y);
}
};
int main()
{
Client cl;
cl.sub(1,2);
}
main.cpp:4:6: error: must use ‘.*’ or ‘->*’ to call pointer-to-member function in ‘f (...)’, e.g. ‘(... ->* f) (...)’
4 | f(args...);
| ~^~~~~~~~~
|
You can use std::invoke(f, args...); instead of f(args...).¹
I've rarely used pointers to member functions, to be honest, but on cppreference I see that for a pointer p to a unary member function of a class C, where c is an object of class C, the call syntax would be (c.*p)(x), where x is the argument other than this/c.
This means that if you don't want to use std::invoke, you'd have to extract the first element of args... and the rest of them and pass them like this: (first_of_args.*f)(rest_of_args...). While retrieving first_of_args is relatively easy (std::get<0>(std::forward_as_tuple(args...));), obtaining the pack rest_of_args requires some meta-programming trick, so I guess std::invoke is just the best solution. (The problem is exacerbated by the fact that you can't pass some_obj_ptr->*some_pointer_to_member_fun around, but you must apply it immediately. If that was not the case, you could think of constructing a std::tuple with all but the first element from std::forward_as_tuple(args...) and then use std::apply to pass them all to first_of_args->*f.)
Actually, an idea just came to my mind: use a generic variadic lambda to destructure args... into first and rest..., in order to meet the member function pointer call syntax. In code, you'd change your non-working f(args...) to the following:
[&f](auto first, auto... rest){
return (first->*f)(rest...);
}(args...);
But this is, directly or indirectly, what std::invoke would do for you.
¹ As highlighted in a comment, you'd rather forward those args perfectly:
std::invoke(std::forward<callable>(f), std::forward<arguments>(args)...);
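Putting that together with the example from the question, a complete sketch using the forwarding form from the footnote:
#include <functional>
#include <utility>

template <class callable, class... arguments>
void timer(callable&& f, arguments&&... args)
{
    std::invoke(std::forward<callable>(f), std::forward<arguments>(args)...);
}

class Client
{
public:
    void receive(int, int) {}
    void sub(int x, int y)
    {
        timer(&Client::receive, this, x, y);   // compiles now: std::invoke handles the member pointer
    }
};

int main()
{
    Client cl;
    cl.sub(1, 2);
}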
|
73,546,220
| 73,547,309
|
What does buffer overrun error in uint32_t mean?
|
I'm trying to write ARGB color data into a uint32_t buffer through a for loop. I get an error:
invalid write to m_imagedata[i][1],wriable range is 0 to 0
if (!render){
uint32_t* m_imagedata = nullptr;
}
else{
m_imagedata = new uint32_t(m_viewportheight * m_viewportwidth);
}
for (uint32_t i = 0; i < m_viewportheight * m_viewportwidth; i++) {
m_imagedata[i] = 0xff00ffff;
}
How can I fix this?
|
You have the syntax for dynamically allocating an array wrong. It should be
m_imagedata = new uint32_t[m_viewportheight * m_viewportwidth];
and then you'd have to delete it using the delete[] form of delete i.e.
delete[] m_imagedata;
However, as others have noted in comments if what you want is an array that can be dynamically sized at runtime, you should use the one that is included in the C++ standard library: std::vector<T>. This way you do not need a loop to initialize each item to some value and you don't have to worry about deleting it yourself. If your m_imagedata member variable is a std::vector<uint32_t> you can initialize it in the constructor of your class like:
class my_image_class {
//...
std::vector<uint32_t> m_imagedata;
public:
my_image_class(int wd, int hgt) :
m_imagedata(wd * hgt, 0xff00ffff)
{}
//...
};
|
73,547,958
| 73,548,957
|
Binding const rvalues reference to non-const rvalues?
|
The following quotes are needed in the question:
[dcl.init.ref]/5:
5- A reference to type “cv1 T1” is initialized by an expression of
type “cv2 T2” as follows:
(5.1) [..]
(5.2) [..]
(5.3) Otherwise, if the initializer expression
(5.3.1) is an rvalue (but not a bit-field) or function lvalue and “cv1 T1” is reference-compatible with “cv2 T2”, or
(5.3.2) [..]
then the value of the initializer expression in the first case and the result of the conversion in the second case is called the converted initializer. If the converted initializer is a prvalue, its type T4 is adjusted to type “cv1 T4” ([conv.qual]) and the temporary materialization conversion ([conv.rval]) is applied. In any case, the reference is bound to the resulting glvalue (or to an appropriate base class subobject).
(emphasis mine)
[expr.type]/2:
If a prvalue initially has the type “cv T”, where T is a cv-unqualified non-class, non-array type, the type of the expression is adjusted to T prior to any further analysis.
Consider the following example:
const int&& r1 = 0;
Taking cv1 T1 as const int, and cv2 T2 as int
It's clear that bullet [dcl.init.ref]/(5.3.1) applies here. The initializer expression is an rvalue (prvalue), and cv1 T1 (const int) is reference-compatible with cv2 T2 (int). And since the converted initializer is a prvalue, its type T4 (int) is adjusted to cv1 T4 (const int). Then, temporary materialization is applied.
But, per [expr.type]/2, before applying temporary materialization conversion, cv1 T4 (const int) becomes int again. Then, by applying temporary materialization, we've got an xvalue denoting an object of type int. Then the reference is bound to the resulting glvalue.
Here's my first question. The reference r1 is a reference to const int and the resulting glvalue is an xvalue denoting an object of type int. So how is r1, which is of type const int&&, now binding to an xvalue of type int? Is this valid binding? Is any wording missing? Have I misunderstood or missed something?
Consider another last example:
const int&& r2 = static_cast<int&&>(0);
The same wording as above applies: the initializer expression is an rvalue (xvalue) and cv1 T1 (const int) is reference-compatible with cv2 T2 (int). And since the converted initializer is an xvalue, not a prvalue, [conv.qual] and even [conv.rval] are not applied (i.e. the condition "If the converted initializer is a prvalue, ..." isn't satisfied).
I know that [conv.rval] isn't needed here since the initializer expression is already an xvalue, but [conv.qual] is required.
And that's my last question. The reference r2 is a reference to const int and the resulting glvalue is an xvalue denoting an object of type int. So how is r2, which is of type const int&&, now binding to an xvalue of type int? Is this valid binding? Is any wording missing? Have I misunderstood or missed something?
|
Here's my first question. The reference r1 is a reference to const int and the resulting glvalue is an xvalue denoting an object of type int. So how is r1, which is of type const int&&, now binding to an xvalue of type int? Is this valid binding?
Yes, that's what happens. I fail to see the issue here.
I know that [conv.rval] isn't needed here since the initializer expression is already an xvalue, but [conv.qual] is required.
No it isn't. Again, I fail to see the issue.
There is, in fact, no rule that says that a reference of type T&& can only refer to an object whose type is exactly T. A const int&& can refer to an int object. The concept of reference-compatibility was invented in order to describe what types of objects a reference can refer to.
|
73,548,215
| 73,548,361
|
Handling multiple data representation for modifying data in vectors in C++
|
I have a simple data structure describing a 2D point in the Cartesian coordinate system, like below:
struct CartPoint
{
double x;
double y;
}
and a second structure, representing a 2D point in the polar coordinate system:
struct PolarPoint
{
double r;
double alpha;
}
and also two functions allowing me to translate from one representation to the other:
void translate(const CartPoint& from, PolarPoint& to) { ... };
void translate(const PolarPoint& from, CartPoint& to) { ... };
I would like to create an object (let me call it PointContainer) that allows me to store cartesian 2D points in one vector, but access them in either the cartesian or the polar representation (based on a compile-time decision). I was thinking of a class exposing two types of non-const iterators, one for each representation. However, I could not find such a solution anywhere and I am not sure whether it is a good idea. I would like to use it like this:
void fillVectorWithCartPts(std::vector<CartPoint>& points)
{
// fills points-vector with 2d cartesian points
...
};
int main()
{
std::vector<CartPoint> pts{};
fillVectorWithCartPts(pts);
PointContainer pc{pts};
// dummy logic representing use possibilities
for (CartPoint& _pt : pc.GetIterator<CartPoint>())
{
_pt = CartPoint{1.0, 2.0}; // modifies points in data via cartesian representation
}
// or
for (PolarPoint& _pt : pc.GetIterator<PolarPoint>())
{
_pt = PolarPoint{3.0, 4.0}; // modifies points in data via polar representation
}
// after modification I can retrieve the vector in the selected representation
std::vector<PolarPoint> polarRes = pc.Retrive<PolarPoint>();
std::vector<CartPoint> cartRes = pc.Retrive<CartPoint>();
return 0;
}
I will be very grateful for any suggestion on the design of such a class, or for other solutions to the issue of needing two representations of the same data.
|
It's likely most idiomatic to create an intermediary class that can be viewed as either a CartPoint or a PolarPoint. You can do this in multiple ways: containment and getters, or inheritance. Then you can store a vector of these points.
CartPoint toCartPoint(const PolarPoint& pp) { ... }
PolarPoint toPolarPoint(const CartPoint& cp) { ... }
class GenericPoint {
public:
GenericPoint(CartPoint cp)
: cp(cp), pp(toPolarPoint(cp)) {}
GenericPoint(PolarPoint pp)
: cp(toCartPoint(pp)), pp(pp) {}
GenericPoint(const GenericPoint& gp)
: cp(gp.cp), pp(gp.pp) {}
operator const CartPoint&() const { return cp; }
operator const PolarPoint&() const { return pp; }
const CartPoint& getCartPoint() const { return cp; }
const PolarPoint& getPolarPoint() const { return pp; }
GenericPoint& operator=(const GenericPoint& rhs)
{
cp = rhs.cp;
pp = rhs.pp;
return *this;
}
GenericPoint& operator=(const CartPoint& cp_in)
{
cp = cp_in;
pp = toPolarPoint(cp_in);
return *this;
}
GenericPoint& operator=(const PolarPoint& pp_in)
{
cp = toCartPoint(pp_in);
pp = pp_in;
return *this;
}
private:
CartPoint cp;
PolarPoint pp;
};
|
73,549,126
| 73,549,288
|
Why does heterogenous version of erase for associative containers take in forwarding reference?
|
Is there any particular reason why the heterogeneous version of erase in associative containers (std::map, std::unordered_map, std::multimap and std::unordered_multimap) takes a forwarding reference, while all the other heterogeneous functions (find/equal_range, count, contains) take a const reference?
For instance in case of std::unordered_map:
template<class K> iterator find(const K& k);
template<class K> const_iterator find(const K& k) const;
template<class K> size_type erase(K&& x);
https://en.cppreference.com/w/cpp/container/unordered_map/erase (overload 4)
https://en.cppreference.com/w/cpp/container/unordered_map/find (overloads 3/4)
Section 24.5.4.1 of (lastest working draft) n4910
(This, as mentioned, applies to other containers as well.)
|
After doing a bit more research I've found the original proposal P2077, which explains it (paragraphs 2 and 3.1).
If the overload
template <class K>
size_type erase( const K& x );
existed, it would be chosen when passing an object of a type which is implicitly convertible to either iterator or const_iterator, and that is not what users might expect.
Additionally, it is not possible to create valid constraints for the mentioned overload because the value category of K is lost due to template argument deduction from a const K& function parameter; therefore, we cannot define constraints over K for the arbitrary case. To propagate the information about the value category of K, the function parameter for heterogeneous erasure is defined as a forwarding reference.
A more detailed explanation can be found in proposal.
|
73,549,440
| 73,550,183
|
Is there a faster way to calculate the inverse of a given nxn matrix?
|
I'm working on a program that requires calculating the inverse of an 8x8 matrix as fast as possible. Here's the code I wrote:
class matrix
{
public:
int w, h;
std::vector<std::vector<float>> cell;
matrix(int width, int height)
{
w = width;
h = height;
cell.resize(width);
for (int i = 0; i < cell.size(); i++)
{
cell[i].resize(height);
}
}
};
matrix transponseMatrix(matrix M)
{
matrix A(M.h, M.w);
for (int i = 0; i < M.h; i++)
{
for (int j = 0; j < M.w; j++)
{
A.cell[i][j] = M.cell[j][i];
}
}
return A;
}
float getMatrixDeterminant(matrix M)
{
if (M.w != M.h)
{
std::cout << "ERROR! Matrix isn't of nXn type.\n";
return NULL;
}
float determinante = 0;
if (M.w == 1)
{
determinante = M.cell[0][0];
}
if (M.w == 2)
{
determinante = M.cell[0][0] * M.cell[1][1] - M.cell[1][0] * M.cell[0][1];
}
else
{
for (int i = 0; i < M.w; i++)
{
matrix A(M.w - 1, M.h - 1);
int cy = 0;
for (int y = 1; y < M.h; y++)
{
int cx = 0;
for (int x = 0; x < M.w; x++)
{
if (x != i)
{
A.cell[cx][cy] = M.cell[x][y];
cx++;
}
}
cy++;
}
determinante += M.cell[i][0] * pow(-1, i + 0) * getMatrixDeterminant(A);
}
}
return determinante;
}
float getComplementOf(matrix M, int X, int Y)
{
float det;
if (M.w != M.h)
{
std::cout << "ERROR! Matrix isn't of nXn type.\n";
return NULL;
}
if (M.w == 2)
{
det = M.cell[1 - X][1 - Y];
}
else
{
matrix A(M.w - 1, M.h - 1);
int cy = 0;
for (int y = 0; y < M.h; y++)
{
if (y != Y)
{
int cx = 0;
for (int x = 0; x < M.w; x++)
{
if (x != X)
{
A.cell[cx][cy] = M.cell[x][y];
cx++;
}
}
cy++;
}
}
det = getMatrixDeterminant(A);
}
return (pow(-1, X + Y) * det);
}
matrix invertMatrix(matrix M)
{
matrix A(M.w, M.h);
float det = getMatrixDeterminant(M);
if (det == 0)
{
std::cout << "ERROR! Matrix inversion impossible (determinant is equal to 0).\n";
return A;
}
for (int i = 0; i < M.h; i++)
{
for (int j = 0; j < M.w; j++)
{
A.cell[j][i] = getComplementOf(M, j, i) / det;
}
}
A = transponseMatrix(A);
return A;
}
While it does work, it does so way too slowly for my purposes, managing to calculate an 8x8 matrix's inverse only about 6 times per second.
I've tried searching for more efficient ways to invert a matrix but was unsuccessful in finding solutions for matrices of these dimensions.
However, I did find conversations in which people claimed that for matrices below 50x50 or even 1000x1000, time shouldn't be a problem, so I was wondering if I have missed something, either a faster method or some unnecessary calculations in my code.
Does anyone have experience regarding this and/or advice?
Sorry for my broken English.
|
Your implementation has problems, as others commented on the question. The largest bottleneck is the algorithm itself, which calculates tons of determinants. (It's O(n!)!)
If you want a simple implementation, just implement Gaussian elimination. See finding the inverse of a matrix and the pseudo code at Wikipedia. It'll perform fast enough for small sizes such as 8x8.
If you want a complex but more efficient implementation, use a library that is optimized for LU decomposition (Gaussian elimination), QR decomposition, etc. (such as LAPACK or OpenCV).
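For illustration, here is a minimal Gauss-Jordan sketch reusing the matrix class from the question (my own sketch, not taken from any library; it assumes the matrix is square and invertible and skips error handling; treating cell[i] as row i is fine here because inversion commutes with transposition):
#include <cmath>
#include <utility>

matrix invertMatrixGaussJordan(matrix M)
{
    const int n = M.w;
    matrix A(n, n);                          // starts as all zeros, becomes the identity
    for (int i = 0; i < n; i++) A.cell[i][i] = 1.0f;
    for (int col = 0; col < n; col++)
    {
        // partial pivoting: bring the row with the largest entry in this column into place
        int pivot = col;
        for (int r = col + 1; r < n; r++)
            if (std::fabs(M.cell[r][col]) > std::fabs(M.cell[pivot][col]))
                pivot = r;
        std::swap(M.cell[col], M.cell[pivot]);
        std::swap(A.cell[col], A.cell[pivot]);
        const float p = M.cell[col][col];    // assumed non-zero since the matrix is invertible
        for (int c = 0; c < n; c++) { M.cell[col][c] /= p; A.cell[col][c] /= p; }
        // eliminate this column from every other row, mirroring each operation on A
        for (int r = 0; r < n; r++)
        {
            if (r == col) continue;
            const float f = M.cell[r][col];
            for (int c = 0; c < n; c++)
            {
                M.cell[r][c] -= f * M.cell[col][c];
                A.cell[r][c] -= f * A.cell[col][c];
            }
        }
    }
    return A;
}
Every swap, scaling and row subtraction applied to M is mirrored on A, so once M has been reduced to the identity, A holds the inverse. This is O(n^3): an 8x8 inverse costs on the order of a thousand floating-point operations instead of the factorial blow-up of cofactor expansion.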
|
73,550,037
| 73,550,155
|
Finding max value in a array
|
I'm writing a program that finds the max value in an array. I did it, but I found a strange bug.
#include<iostream>
using namespace std;
int main() {
int n; //input number of elements in
cin >> n;
int arr[n];
for (int i = 0; i < n; i++) {
cin >> arr[i]; //input array's elements
} int max_value = arr[0];
for (int i = 1; i <= n; i++) {
if (arr[i] > max_value) {
max_value = arr[i];
}
} cout << max_value;
return 0;
}
When I enter 5 on the first line for the number of elements and 2, 7, 6, 8, 9 as the elements of the array, it returns 16 instead of 9. Please help.
|
In arrays, the first index is 0 and the last is n - 1, assuming the array is of length n.
So when looping from i = 1 to i <= n, the final access arr[n] is out of bounds (n is larger than n - 1), which is why you saw a garbage value like 16.
The solution would be to start from 0 and end at i < n, hence:
#include<iostream>
using namespace std;
int main() {
int n; //input number of elements in
cin >> n;
int arr[n];
for (int i = 0; i < n; i++) {
cin >> arr[i]; //input array's elements
} int max_value = arr[0];
for (int i = 0; i < n; i++) {
if (arr[i] > max_value) {
max_value = arr[i];
}
}
cout << max_value;
return 0;
}
You could also use the std::max function, like so:
for(int i = 0; i < n; i ++) {
max_value = max(max_value, arr[i]);
}
|
73,550,141
| 73,560,318
|
Unexplained profiling data from llvm when compiling simple C++ program
|
When profiling the following main.cpp file, I am getting "0" for the "test1 complete" line, indicating the line has not been executed, when I expect to get "1".
main.cpp
#include <cassert>
#include <iostream>
using namespace std;
void test1() {
cout << "test1 start" << endl;
string pdx;
assert(pdx == "");
cout << "test1 complete" << endl;
}
int main() {
test1();
cout << "Done." << endl;
}
To profile the code, the script I am using is:
#!/bin/bash
clang++ -g -std=c++11 -fprofile-instr-generate -fcoverage-mapping main.cpp
# Execute the program
./a.out
llvm-profdata merge default.profraw -output=merged.profraw
llvm-cov report -show-functions=1 ./a.out -instr-profile=merged.profraw main.cpp
llvm-cov show ./a.out -instr-profile=merged.profraw
rm a.out *.profraw
# $ clang++ --version
# clang version 13.0.1 (Red Hat 13.0.1-2.module+el8.6.0+987+d36ea6a1)
# Target: x86_64-unknown-linux-gnu
# Thread model: posix
# InstalledDir: /usr/bin
If I get rid of the following lines, I get the expected result
string pdx;
assert(pdx == "");
Output from profiling main.cpp
$ ./check-code-coverage.sh
test1 start
test1 complete
Done.
File '/home/cssuwbstudent/pisan/bitbucket/pisan342/check-overage/main.cpp':
Name Regions Miss Cover Lines Miss Cover Branches Miss Cover
-------------------------------------------------------------------------------------------------------
_Z5test1v 1 0 100.00% 6 1 83.33% 0 0 0.00%
main 1 0 100.00% 4 0 100.00% 0 0 0.00%
-------------------------------------------------------------------------------------------------------
TOTAL 2 0 100.00% 10 1 90.00% 0 0 0.00%
1| |#include <cassert>
2| |#include <iostream>
3| |
4| |using namespace std;
5| |
6| 1|void test1() {
7| 1| cout << "test1 start" << endl;
8| 1| string pdx;
9| 1| assert(pdx == "");
10| 0| cout << "test1 complete" << endl;
11| 1|}
12| |
13| 1|int main() {
14| 1| test1();
15| 1| cout << "Done." << endl;
16| 1|}
$
Any insight would be much appreciated.
|
This is a bug, and should be reported to the people who make clang.
|
73,550,146
| 73,556,379
|
Why can't constexpr be used for non-const variables when the function only uses the types?
|
Maybe the title is not clear, so concretely:
#include <type_traits>

template<typename T>
constexpr int test(T)
{
    return std::is_integral<T>::value;
}

int main()
{
    constexpr int a = test(1);   // right
    constexpr int b = test(1.0); // right
    int c = 2;
    constexpr int d = test(c);   // ERROR!
    return 0;
}
In fact, the function doesn't use anything but the type of the parameter, which can be determined obviously in the compilation time. So why is that forbidden and is there any way to make constexpr get the value when only the type of parameter is used?
In fact, I want users to be able to call the function with arguments directly, rather than writing code like test<decltype(b)>, which works but is not convenient, to check whether the types of the parameters obey some rules.
|
Just take T by reference so it doesn't need to read the value:
template<typename T>
constexpr int test(T&&)
{
return std::is_integral<std::remove_cvref_t<T>>::value;
}
You can even declare test with consteval, if you want to.
(Note that stripping cv-qualifiers isn't necessary in this instance; cv-qualified integral types satisfy the std::is_integral trait.)
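Assuming that change, the failing line from the question should compile, since the value of c is never read during the constant evaluation; a quick sketch inside main():
int c = 2;
constexpr int d = test(c); // OK now: only the type of c is inspected, never its value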
|
73,550,199
| 73,550,276
|
How to catch a base class constructor's exception in C++?
|
I have a class that is derived from another class. I want to be able to catch and re-throw the exception(s) thrown by the base class's constructor in my derived class's constructor.
There seems to be a solution for this in C#:
https://stackoverflow.com/a/18795956/13147242
But since there is no such keyword as C#'s base, I have no idea how, and whether, this is possible in C++.
Is there a solution? This would enable me to reuse quite a lot of code.
|
There is a pretty good example of this here: Exception is caught in a constructor try block, and handled, but still gets rethrown a second time
Ultimately, if you manage to catch an exception in the derived class the only thing you should do is to either rethrow that exception or throw a new one. This is because if the base class constructor does not complete then the derived class will be in an undefined state and you should not use it.
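For reference, the C++ mechanism for this is a constructor function-try-block; a minimal sketch (the class names and the message wrapping are made up for illustration):
#include <stdexcept>
#include <string>

struct Base {
    Base() { throw std::runtime_error("Base construction failed"); }
};

struct Derived : Base {
    Derived() try : Base() {
        // constructor body
    } catch (const std::exception& e) {
        // Log, wrap, or translate here. If the handler does not throw itself,
        // the original exception is rethrown automatically when the handler
        // finishes; the Derived object never comes into existence.
        throw std::runtime_error(std::string("Derived: ") + e.what());
    }
};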
|
73,550,443
| 73,550,472
|
Why can't we initialize a data member in a member function using a list initializer?
|
Why does assignment work, but initialization not?
Please explain what is happening in these 3 cases in the code.
#include <iostream>

class Fun {
public:
    void set_data_variable()
    {
        y {2};       // case 1: this doesn't work
        // y = 2;    // case 2: this does work
        // y = {2};  // case 3: this also works
        std::cout << y << "\n";
    }

private:
    int y = 6;
};

int main()
{
    Fun f;
    while (1) {
        f.set_data_variable();
    }
}
|
For intrinsic types (and also user-defined classes) this syntax:
y {2};
is only valid during object construction. The object y has already been constructed; that happened in the compiler-generated Fun() (the default constructor of the Fun class), using the default member initializer int y = 6;.
y = 2; and y = {2}; work because they are assignment operations.
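To show where list initialization of a member is actually allowed, here is a minimal sketch of the same class using a constructor member initializer list (an illustration, not the asker's code):
class Fun {
public:
    Fun() : y{2} {}          // list initialization is valid here: the member is being constructed
    void set_data_variable()
    {
        y = {2};             // after construction, only assignment is possible
    }
private:
    int y = 6;               // default member initializer; ignored here because the constructor initializes y
};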
|
73,550,449
| 73,550,535
|
C++ Default Argument using Default Constructor to instantiate
|
Is it possible to have a function that takes a reference to an argument that has a default, where the default is instantiated using its default constructor?
For example:
void g(Foo &f = Foo()) {}
This does not work, but I feel that it conveys my intention quite clearly.
|
References to non-const cannot bind to temporary values, and Foo() is a temporary Foo object.
If you don't need to modify the passed object, then you can make f a reference to const:
void g(const Foo& f = Foo()) {}
If you do modify the passed reference, then using a temporary doesn't make much sense, since you would be modifying an object that goes out of scope as soon as the function returns.
|
73,550,544
| 73,550,728
|
How to use regex_iterator with const char/wchar_t*?
|
How to use the regex_iterator on data types of const char*/wchar_t*?
From docs:
using cregex_iterator = regex_iterator<const char*>;
using wcregex_iterator = regex_iterator<const wchar_t*>;
using sregex_iterator = regex_iterator<std::string::const_iterator>;
using wsregex_iterator = regex_iterator<std::wstring::const_iterator>;
Reproducible example:
#include <string>
#include <iostream>
#include <regex>
template <typename T>
auto RegexMatchAll(const T* str, const T* regex)
{
    using iter = std::regex_iterator<T>;
    std::basic_regex<T> re(regex);
    auto words_begin = iter(str, (str + std::wcslen(str)), re); // <--- how to iterate according to the data in this case?
    auto words_end = iter();
    std::vector<std::match_results<iter>> result;
    for (iter i = words_begin; i != words_end; ++i) {
        std::match_results<iter> m = *i;
        result.emplace_back(m);
    }
    return result;
}
int main() {
auto matches = RegexMatchAll(L"10,20,30", LR"((\d+))");
}
|
You have a few errors. Assuming you pass a const char* or const wchar_t*, T will be char or wchar_t, so:
std::regex_iterator expects an iterator type, which neither char nor wchar_t are. You want to pass it const T* instead.
*i will yield a std::match_results<const T*> as well, so that's the type you should store in your result vector.
std::wcslen is only good for T = wchar_t, you need to use std::strlen when T = char. I would create an overloaded wrapper function that calls the appropriate standard library function.
Putting that all together, something like this should work:
std::size_t str_len(const char* s) { return std::strlen(s); }
std::size_t str_len(const wchar_t* s) { return std::wcslen(s); }

template <typename T>
auto RegexMatchAll(const T* str, const T* regex)
{
    using iter = std::regex_iterator<const T*>;
    std::basic_regex<T> re(regex);
    auto words_begin = iter(str, str + str_len(str), re);
    auto words_end = iter();
    return std::vector<std::match_results<const T*>>(words_begin, words_end);
}
Demo
Note: I also simplified the std::vector construction to use its iterator pair constructor.
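A quick usage sketch of the fixed template with both character types (this assumes <cstring> and <cwchar> are included for std::strlen/std::wcslen):
int main() {
    auto wide   = RegexMatchAll(L"10,20,30", LR"((\d+))");
    auto narrow = RegexMatchAll("10,20,30", R"((\d+))");
    std::cout << wide.size() << ' ' << narrow.size() << '\n'; // prints "3 3"
}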
|
73,551,092
| 73,551,209
|
Change Nlohmann json key-value with increment function? c++
|
Instead of this:
j["skills"]["atk"] = j["skills"]["atk"].get<int>() + 6;
I want something like this:
void json_inc(json ref, int value) //function to increment int key-values
{
ref = ref.get<int>() + value;
}
json_inc(j["skills"]["atk"], 6); //adds 6
Is this possible for nested json objects like above?
|
You are passing your json object by value. This means that the object will, in fact, be duplicated in memory, and whatever changes you make to the copy will cease to exist once you leave the scope of the function.
To actually manipulate the variable you are passing (and this works for any kind of C++ variable), you need to pass by reference.
This is done by changing the prototype of your function to void json_inc(json &ref, int value).
The & in the parameter declares a reference in C++. Under the hood, a reference behaves like an immutable pointer to the variable you are passing, so any changes you make are performed directly on that variable instead of on a copy.
For larger structures, it is almost always desirable to pass by reference, because it eliminates the need to copy the object (after all, only the original variable needs to exist).
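Concretely, for the code in the question that just means adding & to the parameter (a sketch; compare it with the C pointer variant below):
void json_inc(json& ref, int value) // function to increment int key-values
{
    ref = ref.get<int>() + value;
}

json_inc(j["skills"]["atk"], 6); // adds 6, modifying j in place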
If you should venture into C at some point (where references do not exist), you can achieve the same thing with pointers:
void json_inc(json *ref, int value) //function to increment int key-values
{
*ref = ref->get<int>() + value;
}
json_inc(&j["skills"]["atk"], 6); //adds 6
|
73,551,110
| 73,551,673
|
What is the use of throw() after a function declaration?
|
Throw is defined as a C++ expression in https://en.cppreference.com/w/cpp/language/throw. Syntactically, it is followed by an exception class name. For example:
int a = 1, b = 0;
if (b==0){
string m ="Divided by zero";
throw MyException(m); //MyException is a class that inherit std::exception class
}
However, I have seen other syntaxes with throw that I don't quite understand:
void MyFunction(int i) throw(); // how can we have an expression following a function definition?
or within a custom exception class, we also have:
class MyException : public std::exception
{
public:
    MyException(const std::string m)
        : m_(m)
    {}
    virtual ~MyException() throw() {};    // what is throw() in this case?
    const char* what() const throw() {    // what are the parentheses called?
        cout << "MyException in ";
        return m_.c_str();
    }
private:
    std::string m_;
};
Therefore, my questions are:
Is there a common syntax rule that allows an expression followed by a function definition?
Why do we have parenthesis following an expression throw? What are they called in C++?
|
throw() is not a throw expression. It is a completely independent syntax construct that just happens to look the same (C++ likes to reuse keywords for multiple purposes instead of reserving more identifiers). The position where throw() is used in your examples is not a position where the grammar of the language expects an expression, therefore it can't be a throw expression. Additionally () is not a valid expression (as would be required after throw in a throw expression) either.
It is the syntax to declare a non-throwing dynamic exception specification for a function, meaning it declares that the function does not throw any exceptions.
The syntax has been deprecated since C++11 and has finally completely been removed from the language with C++20. Therefore it shouldn't be used anymore.
The exact mechanism of the dynamic exception specification has no replacement, but specifically the non-throwing declaration throw() has been superseded by the noexcept specifier (since C++11). So all uses of throw() on a function declaration in old code should be updated to noexcept instead.
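A minimal before/after sketch of that migration:
// Deprecated (removed in C++20) dynamic exception specification:
void MyFunction(int i) throw();

// Modern equivalent since C++11:
void MyFunction(int i) noexcept;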
|
73,551,252
| 73,551,378
|
No matching default constructor for static unique_ptr
|
I'm trying to use my custom functions for unique_ptr deletion. But the compiler gives me errors. Somehow I suspect that this is because my deleter is not a class type. But I swear it worked before with functions, I used to plug in C-functions just fine. What is the real deal?
Demo
#include <cstdio>
#include <memory>
#include <utility>

using cTYPE = struct {
    int a;
};

void delete_wrapper(cTYPE* ptr)
{
    return free(ptr);
}

cTYPE* new_wrapper()
{
    return static_cast<cTYPE*>(malloc(sizeof(cTYPE)));
}

int main()
{
    auto foo = []() {
        using tmp_storage_type = std::unique_ptr<cTYPE, decltype(delete_wrapper)>;
        static tmp_storage_type obj;
        obj = tmp_storage_type{ new_wrapper(), delete_wrapper };
    };
    foo();
}
Errors (amongst others, clang gives better errors here):
<source>:23:33: note: in instantiation of template class 'std::unique_ptr<cTYPE, void (cTYPE *)>' requested here
static tmp_storage_type obj;
^
<source>:24:15: error: no matching constructor for initialization of 'tmp_storage_type' (aka 'unique_ptr<cTYPE, void (cTYPE *)>')
obj = tmp_storage_type{ new_wrapper(), delete_wrapper };
|
You can't use a function type as template argument for the Deleter template parameter of std::unique_ptr. Use a function pointer instead:
using tmp_storage_type = std::unique_ptr<cTYPE, decltype(&delete_wrapper)>;
However, this comes with a pitfall: the value-initialized state of such a deleter is a null pointer, which causes undefined behavior when accidentally called. For that reason the default constructor of std::unique_ptr is removed if the deleter is a pointer type, and therefore
static tmp_storage_type obj;
without initializing with a deleter will still not work.
It would be better to use a simple function object:
struct FreeDeleter {
void operator()(void* p) const noexcept { std::free(p); }
};
using tmp_storage_type = std::unique_ptr<cTYPE, FreeDeleter>;
It also works for all types of pointers allocated with malloc and can be used with a default-constructed std::unique_ptr.
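With that deleter, the lambda from the question compiles as written, including the default-constructed static object (a sketch):
auto foo = []() {
    using tmp_storage_type = std::unique_ptr<cTYPE, FreeDeleter>;
    static tmp_storage_type obj;                 // OK: FreeDeleter is default-constructible
    obj = tmp_storage_type{ new_wrapper() };     // no deleter argument needed
};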
Also beware that using malloc/free this way only works because cTYPE and all of its subobjects happen to be of implicit-lifetime type. If that wasn't the case using malloc instead of new would result in undefined behavior sooner or later.
Also beware that malloc and free are not guaranteed to be imported into the global namespace scope with the includes you are using. First of all because these functions are declared in <cstdlib> which you don't include and secondly because even including that they may be declared only in std::. So you should qualify them with std::.
Also, writing using cTYPE = struct { is really just a weird way of writing struct cTYPE {, except that the former comes with additional restrictions with regards to what the class can contain and how linkage is applied to it.
|
73,551,651
| 73,551,896
|
C++ EIGEN: How to create triangular matrix map from a vector?
|
I would like to use data stored into an Eigen (https://eigen.tuxfamily.org) vector
Eigen::Vector<double, 6> vec({1,2,3,4,5,6});
as if they were a triangular matrix
1 2 3
0 4 5
0 0 6
I know how to do it for a full matrix using Eigen's Map
Eigen::Vector<double, 9> vec({1,2,3,4,5,6,7,8,9});
std::cout << Eigen::Map<Eigen::Matrix<double, 3, 3, RowMajor>>(vec.data());
which produces
1 2 3
4 5 6
7 8 9
However I do not know how to make a Map to a triangular matrix.
Is it possible?
Thanks!
[Edited for clarity]
|
In my opinion this cannot be done using Map only: The implementation of Map as it is relies on stride sizes that remain constant no matter their index positions, see https://eigen.tuxfamily.org/dox/classEigen_1_1Stride.html.
To implement a triangular matrix map you would have to have a Map that changes its inner stride depending on the actual column number. The interfaces in Eigen do not allow that at the moment, see https://eigen.tuxfamily.org/dox/Map_8h_source.html.
But if you are just concerned about the extra memory you can just use Eigen's sparse matrix representation:
https://eigen.tuxfamily.org/dox/group__TutorialSparse.html
(Refer to section "Filling a sparse matrix".)
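A minimal sketch of filling a sparse upper-triangular matrix from the vector in the question (note this copies the data rather than mapping it, which is the trade-off of this approach):
#include <Eigen/Dense>
#include <Eigen/Sparse>

Eigen::SparseMatrix<double> upperTriangularFrom(const Eigen::Vector<double, 6>& vec)
{
    Eigen::SparseMatrix<double> tri(3, 3);
    tri.reserve(6);                       // 6 non-zeros in a 3x3 upper triangle
    int k = 0;
    for (int i = 0; i < 3; ++i)           // row
        for (int j = i; j < 3; ++j)       // only upper-triangular columns
            tri.insert(i, j) = vec(k++);
    tri.makeCompressed();
    return tri;
}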
|
73,552,084
| 73,552,457
|
Is returning a reference to a pointer in a class defined behaviour?
|
Consider
#include <iostream>
struct Foo
{
int* n;
Foo(){n = new int{};}
~Foo(){delete n;}
int& get()
{
int* m = n;
return *m;
}
};
int main()
{
Foo f;
std::cout << f.get();
}
This is a cut-down version of a class that manages a pointer, and has a method that returns a reference to the dereferenced pointer.
Is that defined behaviour?
|
Is that defined behaviour?
Yes, the given program has well-defined behaviour. You're returning a non-const lvalue reference that refers to the dynamically allocated integer pointed to by both n and m. The integer object still exists after the call f.get(); that is, it is not a function-local variable.
Note also that just returning a reference to a local variable is not undefined behavior in itself. It's only if you use that returned reference (a dangling reference) to a local variable that no longer exists that you get UB.
|
73,552,906
| 73,553,217
|
Template argument deduction with designated initializers
|
I have the following piece of example code:
#include <array>

template<std::size_t N>
struct Cfg {
    std::array<int, N> Values;
    // ... more fields
};

constexpr Cfg c = {
    .Values = { 0, 1, 2, 3 }
};
Is it possible to deduce the template parameter N from the designated initializers? Using makeArray or initializing it with a previously defined array doesn't work either. Writing Cfg<4> explicitly, for example, works.
|
The designated initializer is not relevant here. For the purpose of class template argument deduction it is just handled as if it was a usual element of the initializer list.
(At least that is how I read the standard. It is a bit unclear in [over.match.class.deduct] how a designated initializer ought to be handled when actually performing the overload resolution over the set of deduction guides.)
The problem is that there is no implicit deduction guide to deduce from the nested (untyped) braces, which can only deduce std::initializer_list and (reference to) array types, but your member is std::array. Or to be more precise the implicit aggregate deduction guide will not be formed under these circumstances.
Both
constexpr Cfg c = {
.Values = { 0, 1, 2, 3 }
};
and
constexpr Cfg c = {
{ 0, 1, 2, 3 }
};
will fail, but both
constexpr Cfg c = {
.Values = std::array{ 0, 1, 2, 3 }
};
and
constexpr Cfg c = {
std::array{ 0, 1, 2, 3 }
};
will work since you now gave the initializer a type that the implicit aggregate deduction guide (since C++20) can deduce against the std::array member.
To make the syntax without explicit std::array work you will need to add a deduction guide that can deduce the size of the nested braced-init list, for example:
template<typename T, std::size_t N>
Cfg(const T(&)[N]) -> Cfg<N>;
MSVC and Clang do not accept this though, I am not sure why.
|
73,553,727
| 73,554,010
|
Comparator function for upper_bound or lower_bound for vector of vector
|
How (if it is possible) can I write a comparator function for upper_bound on a vector of vectors such that it compares all indices of the inner vector and finds an element in which every corresponding element is larger?
For example for a given arr
vector<vector<int>>arr={{0,1,1},{0,1,2},{0,2,1},{1,2,3},{4,1,2},{4,3,2}}
the upper bound of {0,1,1} should give {1,2,3}, i.e. all corresponding elements should be larger.
|
I will assume that the predicate you have in mind is "all values in the inner vectors must be greater".
upper_bound requires a vector of data that is partitioned with respect to the given sample.
In the given example, the data vector is not partitioned w.r.t. {0,1,1} (and the assumed predicate), hence you cannot even use upper_bound.
The answer is, therefore: The problem and the proposed solution are incompatible. Stated differently:
is it possible
No.
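To illustrate why, here is what the componentwise predicate could look like and how the example data violates the partition requirement (a sketch, not a working upper_bound call):
#include <algorithm>
#include <vector>

// Predicate implied by the question: "a is componentwise smaller than b".
auto all_less = [](const std::vector<int>& a, const std::vector<int>& b) {
    return std::equal(a.begin(), a.end(), b.begin(),
                      [](int x, int y) { return x < y; });
};

// Applying all_less({0,1,1}, row) to the rows {0,1,2},{0,2,1},{1,2,3},{4,1,2},{4,3,2}
// yields false, false, true, false, true. The true/false results are interleaved,
// so arr is not partitioned with respect to this predicate and std::upper_bound
// has no meaningful result here.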
|
73,553,856
| 73,558,241
|
What's the best way to pass an empty string as argument to boost::program_options?
|
I have a program that use boost::program_options to parse a command line. One of the parameter is the name of an AMQP exchange, and a default value is provided. For testing purposes, I would like to override this AMQP exchange name with an empty string (to use a default exchange).
I don't know how to pass an empty string to boost::program_options. Is that possible? Without modifying the source code? If not, what do you recommend?
Here is a minimum working example code :
#include <boost/program_options.hpp>
namespace po = boost::program_options;
constexpr auto EXECUTABLE_DESCRIPTION =
"Minimum working example showing my difficulties while trying to "
"pass an empty string as an argument to boost::program_options.";
int main(int argc, char **argv) {
std::string elkExchange;
po::options_description config("");
config.add_options()
("amqp.exchange",
po::value<std::string>(&elkExchange)
->default_value("default-exchange"), // default value used in prod
"Exchange on which to send the events");
// do the parsing
po::options_description cmdline_options(std::string{EXECUTABLE_DESCRIPTION});
cmdline_options.add(config);
po::variables_map args;
store(po::command_line_parser(argc, argv).
options(cmdline_options).run(), args);
notify(args);
// debug display
std::cout << "Send event on elk.exchange: {" << elkExchange << "}" << std::endl;
// real application code here ...
return EXIT_SUCCESS;
}
Here is what I would like to do:
# default value is used as expected
$ ./boost/empty-string-in-cli/exec
Send event on elk.exchange: {default-exchange}
# override with non-empty value work as expected
$ ./boost/empty-string-in-cli/exec --amqp.exchange='custom'
Send event on elk.exchange: {custom}
# here is where the troubles begin
$ ./boost/empty-string-in-cli/exec --amqp.exchange=''
terminate called after throwing an instance of 'boost::wrapexcept<boost::program_options::invalid_command_line_syntax>'
what(): the argument for option '--amqp.exchange' should follow immediately after the equal sign
Aborted (core dumped)
Thanks in advance : )
|
To my surprise, the adjacent long option parser absolutely forbids the = sign with a zero-length value; see the code at https://github.com/boostorg/program_options/blob/develop/src/cmdline.cpp#L520.
Therefore it seems your only recourse is to avoid using the adjacent-value syntax:
for EXCHANGE in foo bar ''; do ./build/sotest --amqp.exchange "$EXCHANGE"; done
Prints
Send event on elk.exchange: {foo}
Send event on elk.exchange: {bar}
Send event on elk.exchange: {}
|
73,554,015
| 73,587,197
|
confusion about gluProject and OpenGL clipping
|
Consider the fixed transformation pipeline of OpenGL, with the following parameters:
GL_MODELVIEW_MATRIX
0,175,303.109,0,688.503,-2741.84,1583,0,29.3148,5.52094,-3.18752,0,-87.4871,731.309,-1576.92,1
GL_PROJECTION_MATRIX
2.09928,0,0,0,0,3.73205,0,0,0,0,-1.00658,-1,0,0,-43.9314,0
GL_VIEWPORT
0,0,1920,1080
When I draw the faces of the unit cube I get the following:
By looking at the picture, I would expect half of the vertices to have pixel y-coordinate above 1080, and the other half to have a negative y-coordinate.
Instead, with gluProject, all vertices have y > 1080:
model coordinate 0 0 0 -> screen coordinate 848.191 1474.61 0.989359
model coordinate 1 0 0 -> screen coordinate 821.586 1973.88 0.986045
model coordinate 0 1 0 -> screen coordinate -198317 667165 4.61719
model coordinate 1 1 0 -> screen coordinate -2957.48 12504.1 1.07433
model coordinate 0 0 1 -> screen coordinate 885.806 1479.77 0.989388
model coordinate 1 0 1 -> screen coordinate 868.195 1979.01 0.986088
model coordinate 0 1 1 -> screen coordinate -438501 1.39841e+06 8.60228
model coordinate 1 1 1 -> screen coordinate -3191.35 12592.4 1.07507
I could successfully reproduce the gluProject() results with my "custom" calculations.
Why the y-coordinate of all vertices is above 1080?
P.S. To draw the cube I rely on:
glBegin(GL_QUADS);
for(int d = 0; d < 3; ++d)
for(int s = 0; s < 2; ++s)
for(int v = 0; v < 4; ++v)
{
const int a = (v & 1) ^ (v >> 1 & 1);
const int b = v >> 1 & 1;
const int d0 = d;
const int d1 = (d + 1) % 3;
const int d2 = (d + 2) % 3;
double p[3];
p[d] = s;
p[d1] = a;
p[d2] = b;
glColor3dv(p);
glVertex3dv(p);
}
glEnd();
|
I found the answer, in part thanks to this post.
The explanation is that the 4 vertices I expected to have y < 0 in screen space are behind the camera, and so have w_clip < 0.
Perspective division (y_clip / w_clip) then produces a positive value in normalized device coordinates and hence in screen space.
|
73,554,051
| 73,554,109
|
Can I define a constructor from another one?
|
Here's a dummy example to illustrate:
class C
{
    // complex class with many fields and methods
    // including very expensive:
    int computeA();
    int computeB();
};

struct S
{
    S(int a, int b); // initializes as {a, b, a*b};

    // how to define below constructor?
    S(const C& c); // Should be equivalent to calling S(c.computeA(), c.computeB())

    int a;
    int b;
    int ab;
};
I'm probably missing something simple but all my attempts are syntactically incorrect.
I can obviously circumvent the problem by using a helper function rather than a constructor, but is there a proper way to define such constructor directly?
And of course I don't want to do this:
S(const C& c); // initialize as {c.computeA(), c.computeB(), c.computeA()*c.computeB()};
because of repeating unnecessarily expensive computations.
|
This code does what you want
S(const C& c) : S(c.computeA(), c.computeB()) {}
It's called a delegating constructor and is available from C++11.
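Putting it together with the question's struct (a sketch): computeA() and computeB() are each called exactly once, and the two-argument constructor does the a*b work:
struct S
{
    S(int a, int b) : a(a), b(b), ab(a * b) {}
    S(const C& c) : S(c.computeA(), c.computeB()) {} // delegates; no repeated computation
    int a;
    int b;
    int ab;
};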
|
73,554,489
| 73,554,592
|
C++ override of nested class function
|
I have a class that derives from a base class, which has a nested class. I want to override a nested class's member function in my derived class and override a base class's member function. (as shown below)
I found a solution (https://stackoverflow.com/a/11448927/13147242) that works by creating another nested class that inherits from the base class's nested class. But when creating an instance of this new nested class I get a different return type, which doesn't help me, because the function I want to override returns an object of the base's nested class while it should still call the nested class's overridden function.
A code example of that setup looks like this:
class Base {
public:
class Nested {
friend class Derived;
public:
virtual void funcNested() { std::cout << "funcNested()" << std::endl; }
};
virtual Nested funcBase() {
std::cout << "funcBase()" << std::endl;
Nested tmp;
tmp.funcNested();
return tmp;
}
};
class Derived : public Base {
public:
// I want to override this function without creating another nested class that derives from Nested, i.e.
// void Base::Nested::funcNested() override { /* ... */ }
// the mentioned solution does it like this:
class DerivedNested : public Base::Nested {
public:
void funcNested() override { std::cout << "override funcNested()" << std::endl; }
};
// but then I can't override this function in a way that the nested function is called
Nested funcBase() override {
std::cout << "override funcBase()" << std::endl;
return DerivedNested();
}
};
int main() {
Derived derived;
Base::Nested nested = derived.funcBase();
nested.funcNested();
return 0;
}
Then the output here is:
override funcBase()
funcNested()
but I want to have:
override funcBase()
override funcNested()
Is this possible?
|
The issue is that in the line Base::Nested nested = derived.funcBase(); a copy of type Nested is created by the Nested copy constructor out of the derived object (DerivedNested); the DerivedNested part is sliced away. Then in the line nested.funcNested(); the method is called on that copy, whose static and dynamic type are both the parent class, so there is no dynamic dispatch.
As described in this answer to a similar question, you should use a reference or smart pointer to prevent creating a copy of the base class. Like:
#include <memory>
class Base {
public:
class Nested {
friend class Derived;
public:
virtual void funcNested() { std::cout << "funcNested()" << std::endl; }
};
virtual std::shared_ptr<Nested> funcBase() {
std::cout << "funcBase()" << std::endl;
Nested tmp;
tmp.funcNested();
return std::make_shared<Nested>(tmp);
}
};
class Derived : public Base {
public:
class DerivedNested : public Base::Nested {
public:
virtual void funcNested() override { std::cout << "override funcNested()" << std::endl; }
};
std::shared_ptr<Nested> funcBase() override {
std::cout << "override funcBase()" << std::endl;
DerivedNested tmp;
tmp.funcNested();
return std::make_shared<DerivedNested>(tmp);
}
};
int main() {
Derived derived;
auto nested = derived.funcBase();
nested->funcNested();
return 0;
}
|
73,554,574
| 73,564,518
|
Cosine interpolation Rendering as Linear Interpolation
|
I am trying to create different interpolation methods between points.
So far I have got linear interpolation down, but my cosine interpolation seems to be rendering as linear, so something is very wrong. I am using C++ to code and SFML to render.
My linear interpolation function is:
sf::Vector2f game::linearInterpolate(sf::Vector2f a, sf::Vector2f b, float randN) {
return sf::Vector2f(a.x * (1 - randN) + b.x * randN,
a.y * (1 - randN) + b.y * randN);
}
where a is the point to interpolate from, b is the point to interpolate to, and randN is a random number. For those unfamiliar with SFML, Vector2f is just a vector of two floats, which are the coordinates of the pixel to draw.
Here is my cosine interpolation function. I think something is wrong here, as it seems to be returning linear interpolation coordinates, but the math looks correct according to every resource I have seen (I think):
sf::Vector2f game::cosineInterpolate(sf::Vector2f a, sf::Vector2f b, float randN) {
float ft = randN * 3.1415927f;
float f = (1 - cos(ft)) * 0.5f;
return sf::Vector2f(a.x * (1-f) + b.x * f,
a.y * (1 - f) + b.y * f);
}
which was found from this tutorial:
https://web.archive.org/web/20160530124230/http://freespace.virgin.net/hugo.elias/models/m_perlin.htm
My function to draw the pixels is:
//For each point to interpolate between:
for (sf::Vector2f &l : this->noiseSpots) {
//How many pixels between each point:
for (unsigned long xScreen = 0; xScreen < 1000; xScreen++) {
//random number between 0 and one
float r = this->rand01();
//calculate points to draw
sf::Vector2f lInterpolatedVec = linearInterpolate(l, this->noiseSpots.at(a + 1), r);
sf::Vector2f coInterpolatedVec = cosineInterpolate(l, this->noiseSpots.at(a + 1), r);
//draw linear interpolated graph
this->graph.setPixel(lInterpolatedVec.x, lInterpolatedVec.y, sf::Color(255, 255, 255));
//draw cosine interpolated graph
this->graph.setPixel(coInterpolatedVec.x, coInterpolatedVec.y, sf::Color(255, 0, 0));
}
if (a < fidelity-1) { a++; }else{};
}
noiseSpots is a vector of each spot to interpolate between, and fidelity is the number of those spots.
I think I may need to put my cosine function call outside of the pixel drawing loop but am unsure how to implement this.
PS: before anyone asks, for some reason std::next is not working to access my next element in my vector, so I am using this workaround with a starting out as 0 and increasing each iteration which is why there is that sorta ugly this->noiseSpots.at(a + 1)
An example of my linear interpolation as requested by molbdino:
And an example of my "Cosine" interpolation:
Edit: Ok so I have fully narrowed it down to the cosine interpolation function, as I can get curved lines by replacing 1 or more of the f values with randN, but I still am unsure why I can't get the damn cosine interpolation working.
|
Ok so turns out I'm an idiot, LEARN FROM MY MISTAKE BEFORE YOU SPEND DAYS TRYING TO FIX THIS YOURSELF.
DO NOT INTERPOLATE ON BOTH AXES:
sf::Vector2f game::cosineInterpolate(sf::Vector2f a, sf::Vector2f b, float randN) {
float ft = randN * 3.1415927f;
float f = (1 - cos(ft)) * 0.5f;
return sf::Vector2f(a.x * (1-f) + b.x * f,
a.y * (1 - f) + b.y * f);
}
Should be:
sf::Vector2f game::cosineInterpolate(sf::Vector2f a, sf::Vector2f b, float randN) {
float ft = randN * 3.1415927f;
float f = (1 - cos(ft)) * 0.5f;
return sf::Vector2f(a.x * (1-randN) + b.x * randN,
a.y * (1 - f) + b.y * f);
}
If you do that on the x axis instead of the y axis, you get cosine waves in the opposite direction. If you do BOTH (like I was), the waves cancel out and you just get a straight line.
|
73,554,630
| 73,567,018
|
ARCore's ArFrame_acquireCameraImage method returns green image
|
I'm trying to get image from camera using ARCore.
I'm calling ArFrame_acquireCameraImage, which returns image with YUV_420_888 format. I also checked it using ArImage_getFormat method.
It returns me 640x480 image. Then I obtain pixel stride for U plane to distinguish images with NV21 or YV12 format.
Then I combine Y, U, V arrays into single one using memcpy, encode it to Base64 (using function by J. Malinen) and print it to log.
Also I tried to perform YUV420p -> RGBA conversion using RenderScript Intrinsics Replacement Toolkit.
I have this code:
LOGD("take frame");
ArImage *image = nullptr;
if (mArSession != nullptr && mArFrame != nullptr &&
ArFrame_acquireCameraImage(mArSession, mArFrame, &image) == AR_SUCCESS) {
const uint8_t *y;
const uint8_t *u;
const uint8_t *v;
int planesCount = 0;
ArImage_getNumberOfPlanes(mArSession, image, &planesCount);
LOGD("%i", planesCount);
int yLength, uLength, vLength;
ArImage_getPlaneData(mArSession, image, 0, &y, &yLength);
ArImage_getPlaneData(mArSession, image, 1, &u, &uLength);
ArImage_getPlaneData(mArSession, image, 2, &v, &vLength);
auto *yuv420 = new uint8_t[yLength + uLength + vLength];
memcpy(yuv420, y, yLength);
memcpy(yuv420 + yLength, u, uLength);
memcpy(yuv420 + yLength + uLength, v, vLength);
int width, height, stride;
ArImage_getWidth(mArSession, image, &width);
ArImage_getHeight(mArSession, image, &height);
ArImage_getPlanePixelStride(mArSession, image, 1, &stride);
//auto *argb8888 = new uint8_t[width * height * 4];
renderscript::RenderScriptToolkit::YuvFormat format = renderscript::RenderScriptToolkit::YuvFormat::YV12;
if(stride != 1) {
format = renderscript::RenderScriptToolkit::YuvFormat::NV21;
}
LOGD("%i %i %i", width, height, format);
/*renderscript::RenderScriptToolkit toolkit;
toolkit.yuvToRgb(yuv420, argb8888, width, height, format);*/
LOGD("%s", base64_encode(yuv420, yLength + uLength + vLength).c_str());
// delete[](argb8888);
delete[](yuv420);
}
if (image != nullptr) {
ArImage_release(image);
}
Full code in repo.
My phone is a Xiaomi Mi A3. I also tried to run this on the emulator, but it still gives me the same picture.
Actual image should look like this:
However, my code prints this image (I decoded it using RAW Pixels):
Decoding parameters:
If I uncomment code for YUV420 -> ARGB conversion and print Base64 for argb8888 array, I will have this image:
Preset: RGB32, width: 640, height: 480.
Base64 of this image.
|
I replaced the RenderScript Intrinsics Replacement Toolkit (which has multithreading and SIMD) with code taken from TensorFlow.
I see these advantages:
It's simpler.
Here's the attempt to use RSIRT:
auto *yuv420 = new uint8_t[yLength + uLength + vLength];
memcpy(yuv420, y, yLength);
memcpy(yuv420 + yLength, u, uLength);
memcpy(yuv420 + yLength + uLength, v, vLength);
renderscript::RenderScriptToolkit::YuvFormat format =
renderscript::RenderScriptToolkit::YuvFormat::YV12;
if(stride != 1) {
format = renderscript::RenderScriptToolkit::YuvFormat::NV21;
}
renderscript::RenderScriptToolkit toolkit;
toolkit.yuvToRgb(yuv420, argb8888, width, height, format);
Here's the line that I wrote to use the TensorFlow code:
ConvertYUV420ToARGB8888(y, u, v, argb8888, width, height, yStride, uvStride, uvPixelStride);
As you can see, RSIRT takes only a single planar image, while the TensorFlow code is written to use an image split into 3 planes, so you don't need to use memcpy. That's the reason why this decision won't hurt performance.
I found out that the raw image is big (1.2 MB), so I shouldn't use Base64 (I think that Logcat just truncated my output, so I wasn't able to see the image). Now I write the image to the app cache and pull it using adb.
Full code:
ArImage *image = nullptr;
if (mArSession != nullptr && mArFrame != nullptr &&
ArFrame_acquireCameraImage(mArSession, mArFrame, &image) == AR_SUCCESS) {
// It's image with Android YUV 420 format https://developer.android.com/reference/android/graphics/ImageFormat#YUV_420_888
const uint8_t *y;
const uint8_t *u;
const uint8_t *v;
int planesCount = 0;
ArImage_getNumberOfPlanes(mArSession, image, &planesCount);
LOGD("%i", planesCount);
int yLength, uLength, vLength, yStride, uvStride, uvPixelStride;
ArImage_getPlaneData(mArSession, image, 0, &y, &yLength);
ArImage_getPlaneData(mArSession, image, 1, &u, &uLength);
ArImage_getPlaneData(mArSession, image, 2, &v, &vLength);
ArImage_getPlaneRowStride(mArSession, image, 0, &yStride);
ArImage_getPlaneRowStride(mArSession, image, 1, &uvStride);
ArImage_getPlanePixelStride(mArSession, image, 1, &uvPixelStride);
int width, height;
ArImage_getWidth(mArSession, image, &width);
ArImage_getHeight(mArSession, image, &height);
auto *argb8888 = new uint32_t[width * height];
ConvertYUV420ToARGB8888(y, u, v, argb8888, width, height, yStride, uvStride, uvPixelStride);
std::ofstream stream("/data/user/0/{your app package name}/cache/img", std::ios::out | std::ios::binary);
for(int i = 0; i < width * height; i++)
stream.write((char *) &argb8888[i], sizeof(uint32_t));
stream.close();
LOGD("%i %i", width, height);
delete[](argb8888);
}
if (image != nullptr) {
ArImage_release(image);
}
However, I did one other thing to apply the TensorFlow yuv2rgb code for my purpose. YUV2RGB inside yuv2rgb.cc has BGRA order, while Android ARGB_8888 has ARGB order. In short, in the inline YUV2RGB method you need to change this line:
return 0xff000000 | (nR << 16) | (nG << 8) | nB;
to
return 0xff000000 | nB << 16 | nG << 8 | nR;
|
73,554,733
| 73,554,845
|
Syntax for declaring const member function returning a bare function pointer, without typedefs?
|
I have a class that stores a function pointer to some binary function
class ExampleClass
{
private:
    double (*_binaryFunction)(double, double);
};
How can I return this pointer in a const "getter"? This is either surprisingly hard to search for or nobody else has asked this. If the function weren't const, the syntax would be
double (*GetBinaryFunction())(double, double);
Where do I put the const qualifier? Like this?
double (*GetBinaryFunction() const)(double, double);
If I wanted to use a typedef, the syntax would be: (right?)
typedef double (*BinaryFunction)(double, double);
BinaryFunction GetBinaryFunction() const;
Not answering this question but still useful: C syntax for functions returning function pointers (does not deal with const member functions)
Also I want to re-iterate that I don't want to return or am dealing with any pointer-to-member.
To address duplicates:
The more precise formulation of this question would be "Where to syntactically put the cv-qualifier in the declaration of a member function returning a function pointer?" The answer turned out to be surprising (in the middle of the declaration, not at the end as is usually the case) and none of the linked questions deal with the question of the const keyword position, only with broader syntax of function returning function pointer.
|
How can I return this pointer in a const "getter"?
Examples:
struct ExampleClass
{
double (*_binaryFunction)(double, double);
double (*GetBinaryFunction() const)(double, double) {
return _binaryFunction;
}
typedef double (*BinaryFunctionTypedef)(double, double);
BinaryFunctionTypedef GetBinaryFunction2() const {
return _binaryFunction;
}
using binaryFunctionUsing = double (*)(double, double);
binaryFunctionUsing GetBinaryFunction3() const {
return _binaryFunction;
}
};
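If neither a typedef nor the inside-out declarator syntax appeals to you, a trailing return type is arguably the most readable spelling of the same getter (GetBinaryFunction4 is just a hypothetical name; this goes inside ExampleClass):
// Inside ExampleClass:
auto GetBinaryFunction4() const -> double (*)(double, double) {
    return _binaryFunction;
}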
|
73,555,327
| 73,555,762
|
Active member of an union after assignment
|
Suppose sizeof( int ) == sizeof( float ), and I have the following code snippet:
union U{
int i;
float f;
};
U u1, u2;
u1.i = 1; //i is the active member of u1
u2.f = 1.0f; //f is the active member of u2
u1 = u2;
My questions:
Does it have a defined behaviour? If not why?
What is the active member of u1 after the assignment and why?
Which member of u1 can be read from after the assignment without causing an UB and why?
|
Does it have a defined behaviour? If not why?
It has defined behaviour. The assignment copies the value of u2, and for me the value of a union is a designation of the active member (although that part is not physically represented and so can't be examined, it determines what is UB and what is not) plus the value of the active member if there is one.
What is the active member of u1 after the assignment and why?
f, see above.
Which member of u1 can be read from after the assignment without causing an UB and why?
f. In general, only the active member of a union can be read without UB in C++. There is a special rule for unions of structs where those structs have a common initial sequence. Note: C is more relaxed and makes some cases implementation-defined (or even fully defined) that are undefined in C++, and I may have missed some changes in C++ made to improve compatibility with C.
If someone wants to look up the standard, I suggest starting with [class.copy.assign]/13.
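A small illustration of the consequence, following the reasoning above:
u1 = u2;
float x = u1.f;  // OK: f is the active member of u1 after the assignment
// int y = u1.i; // would be UB in C++: i is not the active member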
|
73,555,398
| 73,556,546
|
Can I prevent the gcc optimizer from delaying memory allocation?
|
I have a program compiled with gcc 11.2, which first allocates some RAM (8 GB) on the heap (using new), and later fills it with data read out in real time from an oscilloscope.
uint32_t* buffer = new uint32_t[0x80000000];
for(uint64_t i = 0; i < 0x80000000; ++i) buffer[i] = GetValueFromOscilloscope();
The problem I am facing is that the optimizer seems to skip the allocation on the first line and do it on the fly as I traverse the loop, which increases the time spent on each iteration of the loop. Because it is important to be as efficient as possible inside the loop, I found a way to force the memory to be allocated before entering the for loop, namely setting all the reserved values to zero:
uint32_t* buffer = new uint32_t[0x80000000]();
My question is: is there a less intrusive way of achieving the same effect without forcing the data to be zero in the first place (apart from switching off the optimization flags)? I just want to force the memory to be reserved at the moment of allocation, but I do not care whether the reserved values are zero or not.
Thanks in advance!
EDIT1: The evidence that makes me think the optimizer is delaying the allocation is that gnome-system-monitor shows slowly growing RAM usage as I traverse the loop, and only after I finish the loop does it reach 8 GiB. Whereas if I initialize all the values to zero, gnome-system-monitor shows a quick growth up to 8 GiB, and then the loop starts.
EDIT2: I am using Ubuntu 22.04.1 LTS
|
You seem to be misinterpreting the situation. Virtual memory within a user-space process (heap space in this case) does get allocated “immediately” (possibly after a few system calls that negotiate a larger heap).
However, each page-aligned page-sized chunk of virtual memory that you haven’t touched yet will initially lack a physical page backing. Virtual pages are mapped to physical pages lazily, (only) when the need arises.
That said, the “allocation” you are observing (as part of the first access to the big heap space) is happening a few layers of abstraction below what GCC can directly influence and is handled by your operating system’s paging mechanism.
Side note: Another consequence would be, for example, that allocating a 1 TB chunk of virtual memory on a machine with, say, 128 GB of RAM will appear to work perfectly fine, as long as you never access most of that huge (lazily) allocated space. (There are configuration options that can limit such memory overcommitment if need be.)
When you touch your newly allocated virtual memory pages for the first time, each of them causes a page fault and your CPU ends up in a handler in the kernel because of that. The kernel evaluates the situation and establishes that the access was in fact legit. So it “materializes” the virtual memory page, i.e. picks a physical page to back the virtual page and updates both its bookkeeping data structures and (equally importantly) the hardware page mapping mechanism(s) (e.g. page tables or TLB, depending on architecture). Then the kernel switches back to your userspace process, which will have no clue that all of this just happened. Repeat for each page.
Presumably, the description above is hugely oversimplified. (For example, there can be multiple page sizes to strike a balance between mapping maintenance efficiency and granularity / fragmentation etc.)
A simple and ugly way to ensure that the memory buffer gets its hardware backing would be to find the smallest possible page size on your architecture (which would be 4 kiB on a x86_64, for example, so 1024 of those integers (well, in most cases)) and then touch each (possible) page of that memory beforehand, as in: for (size_t i = 0; i < 0x80000000; i += 1024) buffer[i] = 1;.
There are (of course) more reasonable solutions than that↑; this is just an example to illustrate what’s happening and why.
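As code, that ugly pre-touching example could look like this (assuming 4 KiB pages, which is an assumption about the target machine):
// Touch one element per (assumed) 4 KiB page so each page gets physical backing up front.
constexpr std::size_t kIntsPerPage = 4096 / sizeof(uint32_t);
for (std::size_t i = 0; i < 0x80000000ull; i += kIntsPerPage)
    buffer[i] = 1;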
|
73,555,493
| 73,555,660
|
Compiler error on member initialization of c-style array member (compared with std::array member)
|
I encountered this problem while initializing a member array variable (c-style). Interestingly converting the member into a std::array<> solves the problem.
See below:
struct A {
A(int aa) : a(aa) {}
A(const A &a) = delete;
A &operator=(const A &a) = delete;
private:
int a;
std::vector<int> v;
};
struct B1 {
B1() : a1{{{1}, {2}}} {} // -> compiles OK (gcc 9.2)
private:
std::array<A, 2> a1;
};
struct B2 {
B2()
: a2{{1}, {2}} // -> error: use of deleted function 'A::A(const A&)' (gcc 9.2)
{}
private:
A a2[2];
};
Demo
My question is: why is there this difference in (compiler) behavior? I was assuming that functionality-wise they are pretty much the same (I understand std::array<> is more of an aggregate rather than a pure container like vector; I might be wrong though).
Additional observation:
The compiler allows c-style array if I remove the vector<> member in A
GCC 9.5 does not raise any issues. Does it mean it's a GCC bug (I couldn't find anything on the release notes)?
Update:
It is a GCC bug:
Brace initialization of array sometimes fails if no copy constructor (reported in 4.9 and resolved in 9.4)
|
Does it mean it's a GCC bug
Yes, this seems to be a gcc bug that has been fixed in gcc 9.4. Demo.
A bug for the same has been submitted as:
GCC rejects valid program involving initialization of array in member initializer list
|
73,555,606
| 73,555,861
|
std::unordered_set<std::filesystem::path>: compile error on clang and g++ below v.12. Bug or user error?
|
I added the following function template to my project, and a user complained that it wouldn't compile on their system anymore:
template<typename T>
std::size_t removeDuplicates(std::vector<T>& vec)
{
    std::unordered_set<T> seen;
    auto newEnd = std::remove_if(
        vec.begin(), vec.end(), [&seen](const T& value)
        {
            if (seen.find(value) != std::end(seen))
                return true;
            seen.insert(value);
            return false;
        }
    );
    vec.erase(newEnd, vec.end());
    return vec.size();
}
The error message with g++ 9.4 was approximately
error: use of deleted function
'std::unordered_set<_Value, _Hash, _Pred, _Alloc>::unordered_set()
[with
_Value = std::filesystem::__cxx11::path;
_Hash = std::hash<std::filesystem::__cxx11::path>;
_Pred = std::equal_to<std::filesystem::__cxx11::path>;
_Alloc = std::allocator<std::filesystem::__cxx11::path>]'
12 | std::unordered_set<T> seen;
So the error arose when instantiating the above function template with T = std::filesystem::path.
I investigated a little and found that there was no issue when instantiating it with other types, e.g. fundamental types or std::string, but only with std::filesystem::path.
Using Compiler Explorer, I looked at how different compiler versions treat the code and found that only g++ v.12 can compile the instantiation with std::filesystem::path. Any g++ version below 12 fails with the above error. clang produces a similar error (call to implicitly deleted default constructor), even on the newest version (14). I didn't test other compilers.
The workaround I used was substituting std::unordered_set with std::set. Then it works on g++ v.8 and clang v.7 and above.
So I guess the error is a missing hashing function for std::filesystem::path? Or is it an error on my part?
|
The std::hash specialization for std::filesystem::path has only recently been added as resolution of LWG issue 3657 into the standard draft. It hasn't been there in the published C++17 and C++20 standards.
There has always been however a function std::filesystem::hash_value from which you can easily create a function object to pass as hash function to std::unordered_set:
struct PathHash {
auto operator()(const std::filesystem::path& p) const noexcept {
return std::filesystem::hash_value(p);
}
};
//...
std::unordered_set<std::filesystem::path, PathHash> seen;
If you are providing that template without any guarantees that it works on types other than those that have a std::hash specialization defined, then I think there is no problem on your part.
However, if you require the type to be hashable, it would be a good idea to let the user override the hash function used the same way std::unordered_set does. The same also applies to the equality functor used.
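As a sketch of that last suggestion, the function template could expose the hash (and equality) functor so callers with types like std::filesystem::path can plug in PathHash:
template<typename T, typename Hash = std::hash<T>, typename Eq = std::equal_to<T>>
std::size_t removeDuplicates(std::vector<T>& vec)
{
    std::unordered_set<T, Hash, Eq> seen;
    auto newEnd = std::remove_if(
        vec.begin(), vec.end(),
        [&seen](const T& value) { return !seen.insert(value).second; });
    vec.erase(newEnd, vec.end());
    return vec.size();
}

// e.g. removeDuplicates<std::filesystem::path, PathHash>(paths);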
|
73,555,760
| 73,555,871
|
Convert void function to std::function<void()>
|
I am having some trouble passing a callback function as an argument to a function.
The function expects the following type:
std::function<void(char* topic, byte* payload, unsigned int length)>
But when I pass this function:
void AMI_MQTT::Callback(char* topic, byte* payload, unsigned int length)
It is not accepted since there is no conversion type from void() to std::function<void()>
My function comes from a class. If I create a separate function in the cpp file (not in the class), it is accepted. Why is my public function from the class not accepted?
|
The problem here is that your function appears to be a non-static member function. Non-static member functions implicitly have an additional parameter, an instance of the class they belong to, so the signatures of your std::function and the function you are trying to pass simply don't match. However, you can still bind a member function to an instance and make it compatible with the signature of a free function if needed:
AMI_MQTT instance;
std::function<void(char* topic, byte* payload, unsigned int length)> func = std::bind(&AMI_MQTT::Callback, instance, std::placeholders::_1, std::placeholders::_2, std::placeholders::_3);
Alternatively you may want to get use of type erasure of lambdas:
AMI_MQTT instance;
std::function<void(char* topic, byte* payload, unsigned int length)> func = [instance] (char* topic, byte* payload, unsigned int length) {
instance.Callback(topic, payload, length);
};
Be advised, however, that instance inside of a lambda is a copy of the instance outside of it, so they don't share the same state.
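If the callback should act on the original object rather than a copy, a sketch capturing the instance by reference (the instance must then outlive the std::function):
AMI_MQTT instance;
std::function<void(char* topic, byte* payload, unsigned int length)> func =
    [&instance](char* topic, byte* payload, unsigned int length) {
        instance.Callback(topic, payload, length); // shares state with `instance`
    };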
A way more concise solution would be just making the non-static member function static, in this case you can use it without any instance bound to it:
struct AMI_MQTT {
static void Callback(char* topic, byte* payload, unsigned int length);
...
};
std::function function = AMI_MQTT::Callback;
|
73,555,847
| 73,556,339
|
Does assigning a vector variable to itself result in copy C++
|
Suppose I have a vector:
std::vector<uint64_t> foo;
foo.push_back(1);
foo.push_back(27);
I pass this vector to a function by reference.
calculate_something(foo);
int calculate_something(std::vector<uint64_t>& vec) {
// ...
}
In rare circumstances, the function needs to locally modify the vector, in which case a copy must be made. Is this the correct way to do that?
if (some_condition) {
vec = vec;
}
vec.push_back(7);
Edit: The reason I am self-assigning is because assigning to another variable results in a copy and my intuition tells me that the same would occur when assigning back to the same variable.
|
No, it is not correct.
Assignment in C++ doesn't create new objects or change what object a reference refers to. Assignment only changes the value of the object to which the left-hand side refers (either through a built-in assignment operator or through the conventional behavior of operator= overloads).
In order to create new objects that persist longer than the evaluation of an expression, you need a declaration of some variable. Such a declaration can have an initializer using = which is often confused for assignment, which it is not:
std::vector<uint64_t> vec2 = vec;
This creates a new object vec2 of type std::vector<uint64_t> and initializes it with vec, which implies copying vec's state into vec2. This is not assignment! If you write instead
vec2 = vec;
then you have assignment which modifies the state of the object named vec2 to be equal to that of the object referred to by vec. But in order to do that there has to be already a declaration for vec2 in which the vector object itself has been created. The assignment is not creating a new object.
If you simply use
vec = vec;
then there is only one object, the one that vec refers to. It is non-obvious whether this assignment is allowed at all, but even in the best case all it could do is copy the state of the object that vec refers to into the object that vec refers to, meaning that at the end the state of vec should simply be unchanged and there is no other side effect.
In general you can't rebind a name or a reference in C++ to a new object.
So what you want is
std::vector<uint64_t> local_vec = vec;
and then you can use local_vec as a local copy of vec. You can avoid having to specify the type by using auto to indicate that you want the variable to have the same type as the right-hand side (minus reference and const qualifiers):
auto local_vec = vec;
|
73,557,265
| 73,557,302
|
Why is the address different?
|
I wrote the below code for my understanding of pointers.
#include <cstdio>
#include <iostream>
using namespace std;
int main() {
string a = "hello";
const char *st = &a[0];
printf("%p\n", &st[0]);
printf("%p\n", &(a[0]));
printf("%c\n", *st);
printf("%p", &a);
}
This is the output that I get
0x1265028
0x1265028
h
0x7ffe26a91c40
If my understanding is correct, &a should return the address of the string, so why is the value returned by &a different from the rest?
|
A std::string is a C++ object, which internally holds a pointer to an array of chars.
In your code, st is a pointer to the first char in that internal array, while &a is a pointer to the C++ object. They are different things, and therefore the pointer values are also different.
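A small sketch that makes the distinction visible (the %p arguments are cast to void* to keep printf happy):
std::string a = "hello";
printf("%p\n", static_cast<const void*>(&a));       // address of the std::string object itself
printf("%p\n", static_cast<const void*>(a.data())); // address of the characters it manages
printf("%p\n", static_cast<const void*>(&a[0]));    // same as a.data()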
|
73,557,662
| 73,565,214
|
how can I achieve multiple conditional inheritance?
|
I have a type build that has a flags template parameter, and according to the active flag bits, it inherits from the corresponding types. This allows me to "build" classes from many subclasses with a great number of configurations:
#include <type_traits>
#include <cstdint>
struct A { void a() {} };
struct B { void b() {} };
struct C { void c() {} };
struct D { void d() {} };
constexpr std::uint8_t FLAG_BIT_A = 0b1 << 0;
constexpr std::uint8_t FLAG_BIT_B = 0b1 << 1;
constexpr std::uint8_t FLAG_BIT_C = 0b1 << 2;
constexpr std::uint8_t FLAG_BIT_D = 0b1 << 3;
struct empty {};
template<std::uint8_t flags>
using flag_a_type = std::conditional_t<(flags & FLAG_BIT_A), A, empty>;
template<std::uint8_t flags>
using flag_b_type = std::conditional_t<(flags & FLAG_BIT_B), B, empty>;
template<std::uint8_t flags>
using flag_c_type = std::conditional_t<(flags & FLAG_BIT_C), C, empty>;
template<std::uint8_t flags>
using flag_d_type = std::conditional_t<(flags & FLAG_BIT_D), D, empty>;
template<std::uint8_t flags>
struct build :
flag_a_type<flags>, flag_b_type<flags>, flag_c_type<flags>, flag_d_type<flags> {
};
int main() {
build<FLAG_BIT_A | FLAG_BIT_C> foo;
}
so build<FLAG_BIT_A | FLAG_BIT_C> should result in a class that inherits from A and from C.
but it doesn't compile, saying empty is already a direct base class:
error C2500: 'build<5>': 'empty' is already a direct base class
how can I achieve this without having to make 4 different empty structs to avoid the clash?
|
Here's another approach using c++20 that provides much more flexibility for bitmask-based inheritance:
#include <concepts>
#include <cstdint>
template <std::integral auto, class...>
struct inherit_mask {};
template <auto flags, class Base, class... Bases>
requires((flags & 1) == 1)
struct inherit_mask<flags, Base, Bases...>
: Base, inherit_mask<(flags >> 1), Bases...> {};
template <auto flags, class Base, class... Bases>
struct inherit_mask<flags, Base, Bases...>
: inherit_mask<(flags >> 1), Bases...> {};
struct A { void a() {} };
struct B { void b() {} };
struct C { void c() {} };
struct D { void d() {} };
template <std::uint8_t flags>
using build = inherit_mask<flags, A, B, C, D>;
using foo = build<0b0101>;
static_assert(std::derived_from<foo, A>);
static_assert(not std::derived_from<foo, B>);
static_assert(std::derived_from<foo, C>);
static_assert(not std::derived_from<foo, D>);
Compiler Explorer
Works on clang, gcc, and msvc, and doesn't lead to exponential instantiation explosion.
|
73,557,680
| 73,557,862
|
c++ : How to pass a normal c function as hash functor for unordered_map
|
I've got this hash function in C style:
static size_t Wang_Jenkins_hash(size_t h) {
h += (h << 15) ^ 0xffffcd7d;
h ^= (h >> 10);
h += (h << 3);
h ^= (h >> 6);
h += (h << 2) + (h << 14);
return h ^ (h >> 16);
}
Then I try to use it for unordered_map's hash parameter. Do I always need to write a C++ functor as a wrapper in order to use it? Or is there a more convenient way to pass it as a template parameter?
Thanks!
|
Functors are probably the easiest way to do this. When you use a free function the syntax becomes
std::unordered_map<std::size_t, std::size_t, decltype(&Wang_Jenkins_hash)>
but that is only half of what you need. Since we are using a function pointer type, you also need to pass the map the function to use, and with that constructor you have to specify how many initial buckets you want along with the hash function. That could be done like
std::unordered_map<std::size_t, std::size_t, decltype(&Wang_Jenkins_hash)> foo(1024, &Wang_Jenkins_hash);
If you instead switch to a functor then code can be changed to
struct Wang_Jenkins_hash
{
    std::size_t operator()(std::size_t h) const noexcept
    {
        h += (h << 15) ^ 0xffffcd7d;
        h ^= (h >> 10);
        h += (h << 3);
        h ^= (h >> 6);
        h += (h << 2) + (h << 14);
        return h ^ (h >> 16);
    }
};

std::unordered_map<std::size_t, std::size_t, Wang_Jenkins_hash> foo;
If you can use C++20 or newer then you can wrap the function in a lambda to use with the map like
auto my_hash = [](auto val) { return Wang_Jenkins_hash(val); };
std::unordered_map<std::size_t, std::size_t, decltype(my_hash)> foo;
|
73,557,743
| 73,557,810
|
How to ensure that inheriting classes are implementing a friend function (ostream)?
|
I'd rather implement a non-friend function and directly mark the function as virtual.
But I'm in a situation where I'd like to ensure that a specific set of classes implement an overloading of
friend std::ostream& operator << (std::ostream& os, MyClass& myClass);
and I found no other way to link it directly to my class. (Other possibility of implementation of operator is moving it outside my class, without friend)
Syntax lords, do I have any way of properly implementing this rule ? (having children overriding << ostream operator)
Edit - Related subjects on following threads:
Overloaded stream insertion operator (<<) in the sub-class
virtual insertion operator overloading for base and derived class.
|
You can create a pure virtual method - that will require inherited classes to overload it, then make friend output operator to call it. For example:
class MyClass {
public:
virtual ~MyClass() = default;
virtual void output( std::ostream &os ) const = 0;
friend std::ostream& operator << (std::ostream& os, const MyClass& myClass)
{
myClass.output( os );
return os;
}
};
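A derived class then only has to implement output; the made-up Point class below is just an illustration of the pattern:
class Point : public MyClass {
public:
    Point(int x, int y) : x_(x), y_(y) {}
    void output( std::ostream &os ) const override
    {
        os << "Point(" << x_ << ", " << y_ << ")";
    }
private:
    int x_;
    int y_;
};
// Point p{1, 2};
// std::cout << p << '\n';   // prints "Point(1, 2)" via the friend operator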
|
73,557,931
| 73,558,233
|
Convert C Cast Macro To C++ Function
|
I am working with a C library that uses the usual C "inheritance" trick
typedef struct Base_ *BasePointer;
struct Base_ {
};
typedef struct Derived_ *DerivedPointer;
struct Derived_ {
Base_ header;
};
#define BaseCast(obj) ((BasePointer)(obj))
void bar(BasePointer);
void foo(DerivedPointer derived)
{
bar(BaseCast(derived));
}
I am writing some C++ utilities and one such utility is a safer BaseCast() using type traits to check that the object is indeed derived. This BaseCast() must be completely interchangeable with the macro. I have the following but can this be done better?
// convertible_to_base checks for existence of object->header
// and that object->header is of type Base_
template <typename T>
[[nodiscard]] static inline constexpr BasePointer& BaseCast(T& object) noexcept
{
static_assert(util::convertible_to_base<T>::value, "");
return reinterpret_cast<BasePointer&>(object);
}
template <typename T>
[[nodiscard]] static inline constexpr const BasePointer& BaseCast(const T& object) noexcept
{
static_assert(util::convertible_to_base<T>::value, "");
return reinterpret_cast<const BasePointer&>(object);
}
I don't like the reinterpret_cast(), but I could not get this to work any other way.
|
Posting this as an answer because the discussion in the comments got stuck somehow, and I don't see why it is not an alternative. Return the pointer by value:
template <typename T>
[[nodiscard]] static inline constexpr BasePointer BaseCast(T& object) noexcept
{
static_assert(util::convertible_to_base<T>::value, "");
return &object.header;
}
As you mentioned, the reference would be a reference to a temporary, hence the non-const ref would be of little use anyhow.
|
73,558,239
| 73,593,775
|
How to calculate the possible number of sub-string that are palindrome. But substring that are unique?
|
template <class T>
int subPalindrome(T str)
{
int res = 0;
for (int i = 0; i < str.length(); i++)
{
for (int j = 0; (j + i) < str.length() && (i - j) >= 0; j++)
{
if (str[i + j] != str[i - j])
{
break;
}
else res++;
}
}
for (int i = 0; i < str.length(); i++)
{
for (int j = 0; (j + i + 1) < str.length() && i - j >= 0; j++)
{
if (str[i + j + 1] != str[i - j])
{
break;
}
else res++;
}
}
return res;
}
I have to write a program that takes a string as input and calculates the number of possible substrings that are palindromes, but only substrings that are unique. I have implemented the logic of obtaining the number of all possible substrings, but I don't know how to return a count of the substrings that are unique.
|
template <class T>
int subPalindrome(T s)
{
int n = s.size();
// dp array to store whether a substring is palindrome
// or not using dynamic programming we can solve this
// in O(N^2)
// dp[i][j] will be true (1) if substring (i, j) is
// palindrome else false (0)
vector<vector<bool> > dp(n, vector<bool>(n, false));
for (int i = 0; i < n; i++) {
// base case every char is palindrome
dp[i][i] = 1;
// check for every substring of length 2
if (i + 1 < n && s[i] == s[i + 1]) {
dp[i][i + 1] = 1;
}
}
// check every substring of length greater than 2 for
// palindrome
for (int len = 3; len <= n; len++) {
for (int i = 0; i + len - 1 < n; i++) {
if (s[i] == s[i + (len - 1)]
&& dp[i + 1][i + (len - 1) - 1]) {
dp[i][i + (len - 1)] = true;
}
}
}
//*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
// here we will apply kmp algorithm for substrings
// starting from i = 0 to n-1 when we will find prefix
// and suffix of a substring to be equal and it is
// palindrome we will make dp[i][j] for that suffix to be
// false which means it is already added in the prefix
// and we should not count it anymore.
vector<int> kmp(n, 0);
for (int i = 0; i < n; i++) {
// starting kmp for every i form 0 to n-1
int j = 0, k = 1;
while (k + i < n) {
if (s[j + i] == s[k + i]) {
// make suffix to be false
// if this suffix is palindrome then it is
// already included in prefix
dp[k + i - j][k + i] = false;
kmp[k++] = ++j;
}
else if (j > 0) {
j = kmp[j - 1];
}
else {
kmp[k++] = 0;
}
}
}
//*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
int count = 0;
for (int i = 0; i < n; i++) {
string str;
for (int j = i; j < n; j++) {
str += s[j];
if (dp[i][j]) {
// count number of resultant distinct
// substrings and print that substring
count++;
}
}
}
return count;
}
|
73,558,352
| 73,558,730
|
operator== idiosyncrasies between std::variant and std::shared_ptr?
|
The automatically instantiated operator== for my std::variant<> interferes with one of its variations, which is a std::shared_ptr<>. My actual variant has about 12 different possible alternatives, but to keep it short, here I just show a trivial example:
#include <iostream>
#include <string>
#include <vector>
#include <variant>
#include <memory>
using VecInt = std::vector<int>;
using VecStr = std::vector<std::string>;
using VarFoo = std::variant<std::monostate,int>;
using VarBar = std::variant<std::monostate,VecStr,VecInt>;
using PtrInt = std::shared_ptr<int>;
using VarBaz = std::variant<std::monostate,PtrInt>;
int main()
{
VecInt v1{1,2,3};
VecInt v2{1,2,3};
std::cout << "v1 == v2 ? " << std::boolalpha << (v1 == v2) << std::endl;
VecStr v3{"Hello","World"};
VecStr v4{"Hello","World"};
std::cout << "v3 == v4 ? " << std::boolalpha << (v3 == v4) << std::endl;
VarFoo foo1{42};
VarFoo foo2{42};
std::cout << "foo1 == foo2 ? " << std::boolalpha << (foo1 == foo2) << std::endl;
VarBar bar1{VecInt{42,43}};
VarBar bar2{VecInt{42,43}};
std::cout << "bar1 == bar2 ? " << std::boolalpha << (bar1 == bar2) << std::endl;
VarBaz baz1{std::make_shared<int>(42)};
VarBaz baz2{std::make_shared<int>(42)};
std::cout << "baz1 == baz2 ? " << std::boolalpha << (baz1 == baz2) << std::endl;
return 0;
}
Which produces the output:
v1 == v2 ? true
v3 == v4 ? true
foo1 == foo2 ? true
bar1 == bar2 ? true
baz1 == baz2 ? false
The problem being, that the semantics of std::shared_ptr<> is to simply compare the pointer, but not go beyond that to compare the values pointed to, while the rest (std::vector<>, std::variant<>,...) actually compare values.
Of course, if you just use a single pointer, it might be reasonable to assume, that operator==() should only compare the pointer values. But in the context of a variant, this makes no real sense.
Now, with some fictive variant, containing shared pointers, one would have to write ones own override of operator==(), but with many variations in it, this becomes cumbersome quickly. The shortcut being to compare the index() of the shared pointer entries and give them special handling, while resorting for the other possibilities to the existing operator==. But there is the catch:
// the standard, e.g. see https://en.cppreference.com/w/cpp/utility/variant/operator_cmp
template< class... Types >
constexpr bool operator==( const std::variant<Types...>& v,
const std::variant<Types...>& w );
The signature of a self written operator== for my variant will have the same function signature as the one automatically instantiated from the template in the standard. So, we either get linker errors (and factually cannot do a custom override for variants) or we have some infinite recursions in code like this:
#include <variant>
#include <memory>
using DemoVar = std::variant<std::monostate,int,std::shared_ptr<int>>;
constexpr bool operator==(const DemoVar& v, const DemoVar& w) {
if (v.index() == w.index()) {
switch (v.index()) {
case 2:
// custom shared-pointer value comparison
// ...
return <whatever_it_takes>;
default:
return v == w; // bad idea! infinite recursion! Conundrum! No way to differ between
// our custom and the automatically instantiated operator==!
}
}
return false;
}
So, any idea, how to achieve a custom operator== for a variant, without having to type out all the cases? Or should I better not use operator== at all and give the compare function another name entirely?
|
std::shared_ptr<T>::operator== compares the pointers. That's what it should do (irrespective of being in a std::variant or not). If you want to compare the pointees, then you do not want std::shared_ptr, at least not a naked one. You can use a custom class:
struct my_shared_int {
std::shared_ptr<int> value;
// constructor / copy / move / assignment as desired
bool operator==(const my_shared_int& other) const {
// compare the pointees
// (after checking that the values aren't empty)
}
};
The generated std::variant<std::monostate,my_shared_int>::operator== will then compare the integers not the pointers.
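For completeness, a minimal sketch of that comparison, filled in with one possible choice (two empty pointers compare equal):
struct my_shared_int {
    std::shared_ptr<int> value;

    bool operator==(const my_shared_int& other) const {
        if (!value || !other.value)
            return !value && !other.value; // equal only if both are empty
        return *value == *other.value;     // otherwise compare the pointees
    }
};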
|
73,558,440
| 73,558,480
|
How does an std::string field in a struct gets written to an std::fstream?
|
Here is an example struct:
struct Person
{
std::string Name;
int Age;
};
And Here is how i write it to an fstream:
Person p;
p.Name = "Mike";
p.Age = 21;
stream.write((char*)&p, sizeof(p));
As you can see above I write my Person variable to an fstream using write() function. Person's name is written to the stream as "Mike" but when i use it with a const char* it just writes the address to the string. What i do not understand is this: How does fstream write std::string's value but not the pointer to the string itself?
|
There is no special magic here. Try a larger string (say a few lines of lorem ipsum), and you will find the same thing happens as const char* (with a few extra data points like size and capacity).
This behavior comes from the Small String Optimization. For short strings, the characters are actually stored inside the std::string storage itself instead of keeping a pointer to a char buffer storage. Once the string gets large enough, this inline buffer is replaced with a pointer to a larger buffer somewhere else in memory.
What are the mechanics of short string optimization in libc++?
In terms of your code, I would make a special routine for your Person class to write the contents to a binary buffer. First it would write the age, then it would write the string via std::string's c_str() and size() member functions.
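A minimal sketch of such a routine, using the Person struct from the question and the age-then-string order described above (the exact on-disk layout is just one possible choice):
#include <cstdint>
#include <ostream>

void writePerson(std::ostream& stream, const Person& p)
{
    stream.write(reinterpret_cast<const char*>(&p.Age), sizeof(p.Age));

    // Write the length first so a reader knows how many characters follow.
    std::uint64_t len = p.Name.size();
    stream.write(reinterpret_cast<const char*>(&len), sizeof(len));
    stream.write(p.Name.c_str(), static_cast<std::streamsize>(len));
}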
|
73,558,789
| 73,570,836
|
Unable to deduce template arguments when using parameter pack in a member function argument
|
Why does compiler fail to deduce arguments types in this case ?
class MyArrayOfObjects{
//assume this class contains a buffer of ObjectInArray
template<typename ObjectType, typename ... ParamsT>
inline void foreach(ObjectType* obj,void(ObjectType::* func)(ObjectInArray *, ParamsT&&... ), ParamsT&&... params) const {
for (unsigned int i = 0; i < this->internalSize; i++) {
(obj->*func)(this->buffer[i], params...);
}
}
}
Whereas it succeed to deduce them in the error message :
void MyArrayOfObjects::foreach(ObjectType *,void (__cdecl ObjectType::* )(ObjectInArray *,ParamsT &&...),ParamsT &&...) const': could not deduce template argument for 'void (__cdecl ObjectType::* )(ObjectInArray *,ParamsT &&...)' from 'void (__cdecl MyOtherObjectType::* )(ObjectInArray *,int,const char *)
I try to use it this way :
/*
myArray is a MyArrayOfObjects
myOtherObject is a MyOtherObjectType that contains a public function :
void MyOtherObjectType::callThatInLoop(ObjectInArray*,int,const char *)
*/
myArray.foreach(myOtherObject, &MyOtherObjectType::callThatInLoop, 10, "hello world");
Why can't it understand that ParamsT should simply be "int,const char *" ?
|
The parameter is:
void(ObjectType::* func)(ObjectInArray *, ParamsT&&... )
That accepts a member function pointer which takes one pointer argument followed by arbitrary rvalue reference arguments.
The call is:
myArray.foreach(myOtherObject, &MyOtherObjectType::callThatInLoop, 10, "hello world");
That is trying to pass a member function pointer with one pointer argument and a couple arguments passed by value.
Because these are incompatible, deduction fails. A simpler example:
void test(int, const char*);
template <typename... Params>
void foreach1(void (*func)(Params&&...));
void call_foreach1() {
// foreach1(test); // error: deduction fails
}
template <typename... Params>
void foreach2(void (*func)(Params...));
void call_foreach2() {
foreach2(test); // Params deduced as int, const char*
}
See https://godbolt.org/z/qK9rqnPKc
In the example, the focus on the exact form of member function pointer seems misplaced. What matters is that you can call it with the correct parameters, is it not? That's easier to express:
template <typename Object, typename Callable, typename ... Params>
requires std::invocable<Callable, Object*, ObjectInArray*, const Params&...>
void foreach(Object* object, Callable&& callable, const Params&... params) const {
for (std::size_t i = 0; i < internalSize; i++) {
std::invoke(callable, object, buffer[i], params...);
}
}
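With that signature, the call site from the question can presumably stay exactly as it was, since the extra arguments are taken by const reference and forwarded through std::invoke:
myArray.foreach(myOtherObject, &MyOtherObjectType::callThatInLoop, 10, "hello world");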
|
73,559,359
| 73,559,413
|
OpenGL, render to texture with floating point color without clipping value
|
I am not really sure what the English name for what I am trying to do is, please tell me if you know.
In order to run some physically based lighting calculations, I need to write floating point data to a texture using one OpenGL shader and read this data again in another OpenGL shader, but the data I want to store may be less than 0 or above 1.
To do this, I set up a render buffer to render to this texture as follows (This is C++):
//Set up the light map we will use for lighting calculation
glGenFramebuffers(1, &light_Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, light_Framebuffer);
glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);//Needed for light blending (true additive)
glGenTextures(1, &light_texture);
glBindTexture(GL_TEXTURE_2D, light_texture);
//Initialize empty, and at the size of the internal screen
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_FLOAT, 0);
//No interpolation, I want pixelation
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//Now the light framebuffer renders to the texture we will use to calculate dynamic lighting
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, light_texture, 0);
GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, DrawBuffers);//Color attachment 0 as before
Notice that I use type GL_FLOAT and not GL_UNSIGNED_BYTE, according to this discussion Floating point type texture should not be clipped between 0 and 1.
Now, just to test that this is true, I simply set the color somewhere outside this range in the fragment shader which creates this texture:
#version 400 core
void main()
{
gl_FragColor = vec4(2.0,-2.0,2.0,2.0);
}
After rendering to this texture, I send this texture to the program which should use it like any other texture:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, light_texture );//This is the texture I rendered too
glUniform1i(surf_lightTex_ID , 1);//This is the ID in the main display program
Again, just to check that this is working I have replaced the fragment shader with one which tests that the colors have been saved.
#version 400 core
uniform sampler2D lightSampler;
void main()
{
color = vec4(0,0,0,1);
if (texture(lightSampler,fragment_pos_uv).r>1.0)
color.r=1;
if (texture(lightSampler,fragment_pos_uv).g<0.0)
color.g=1;
}
If everything worked, everything should turn yellow, but needless to say this only gives me a black screen. So I tried the following:
#version 400 core
uniform sampler2D lightSampler;
void main()
{
color = vec4(0,0,0,1);
if (texture(lightSampler,fragment_pos_uv).r==1.0)
color.r=1;
if (texture(lightSampler,fragment_pos_uv).g==0.0)
color.g=1;
}
And I got
The parts which are green are in shadow in the testing scene, nevermind them; the main point is that all the channels of light_texture get clipped to between 0 and 1, which they should not do. I am not sure if the data is saved correctly and only clipped when I read it, or if the data is clipped to 0 to 1 when saving.
So, my question is, is there some way to read and write to an OpenGL texture, such that the data stored may be above 1 or below 0.
Also, No can not fix the problem by using 32 bit integer per channel and by applying a Sigmoid function before saving and its inverse after reading the data, that would break alpha blending.
|
The format and type arguments of glTexImage2D only specify the format of the source image data; they do not affect the internal format of the texture. You must use a specific sized internal format, e.g. GL_RGBA32F:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
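The GL_FLOAT type argument from the question could equally be kept; since the data pointer is null, only the internal format matters here:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_FLOAT, 0);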
|
73,559,849
| 73,559,992
|
How to reserve a vector of strings, if string size is variable?
|
I want to add many strings to a vector, and from what I've found, calling reserve() before this is more efficient. For a vector of ints, this makes sense because int is 4 bytes, so calling reserve(10) clearly reserves 40 bytes. I know the number of strings, which is about 60000. Should I call vector.reserve(60000)? How would the compiler know the size of my strings, as it doesn't know if these strings are of length 5 or 500?
|
The compiler doesn't know the size of the strings; it knows the size of a std::string object. The size of a std::string object does not depend on the length of the string it holds. That is because - most of the time [1] - std::string allocates on the heap, so the object itself holds only a pointer and a length.
This also means that when you reserve the vector, you don't yet reserve memory for the strings. This is, however, not always a problem. The std::strings come from somewhere: if the strings you receive are the return value of a function (i.e., you have them by value), then the memory is already allocated for the string (in the return value). Thus, e.g. std::swap() can help speed up populating the array with the results.
If, however, you populate it by passing references into it, then the callee will do the operations that result in allocations. In this case, you'd likely want to loop over the vector and reserve each string:
std::vector<std::string> v;
v.resize(60000); // expected number of strings (resize, not reserve, so the strings exist and can be reserved below)
for (auto& s : v) {
s.reserve(500); // expected/max. size of strings
}
[1] It might occur that the specific implementation of std::string actually has a small, fixed-size buffer for short strings and thus allocates on the heap only when the string is longer than that.
|
73,559,895
| 73,560,267
|
Using template type as argument to std::invoke
|
Consider this code:
using namespace std;
struct FS
{
void print() { cout << "p\n"; }
int s;
};
template <typename T, typename F>
void myInvoke(T& t, F f)
{
invoke(f, t) = 10;
}
int main(int, char**)
{
FS fs;
myInvoke(fs, &FS::s);
cout << fs.s << "\n";
}
Is there any way to avoid the runtime cost of passing the class member pointer to myInvoke?
I can of course do:
template <typename T>
void myInvoke(T& t)
{
invoke(&T::s, t) = 10;
}
// then call with
myInvoke(fs);
What I'd like to do:
template <typename T, typename P, P FS::* f>
void myInvoke(T& t)
{
invoke(f, t) = 30;
}
// call with:
myInvoke<FS, int ,&FS::s>(fs);
But without the typename P. Is there any way to make the call more concise so it can be called with:
myInvoke<FS, &FS::s>(fs);
I know this is a bit arcane, but it'd be really nice to be as concise as possible. Especially when you have widely used library functions.
EDIT:
Link to sandbox above code: https://godbolt.org/z/z4cGdPd3r
|
It seems you are looking for deduced Non-type template parameter (with auto) (C++17):
template <auto F, typename T>
void myInvoke(T& t)
{
invoke(F, t) = 10;
}
with usage:
myInvoke<&FS::s>(fs);
Demo
|
73,560,209
| 73,567,195
|
boost:asio::write writes to socket successfully, but server doesn't see the data
|
I've written a simple code sample that writes some data to the socket towards a simple TCP echo server. The data is written successfully to the socket (writtenBytes > 0), but the server doesn't respond that it has received the data.
The application is run in a Docker devcontainer, and from the development container, I'm communicating with the tcp-server-echo container on the same network.
io_service ioservice;
tcp::socket tcp_socket{ioservice};
void TestTcpConnection() {
boost::asio::ip::tcp::resolver nameResolver{ioservice};
boost::asio::ip::tcp::resolver::query query{"tcp-server-echo", "9000"};
boost::system::error_code ec{};
auto iterator = nameResolver.resolve(query, ec);
if (ec == 0) {
boost::asio::ip::tcp::resolver::iterator end{};
boost::asio::ip::tcp::endpoint endpoint = *iterator;
tcp_socket.connect(endpoint, ec);
if (ec == 0) {
std::string str{"Hello world test"};
while (tcp_socket.is_open()) {
auto writtenBytes =
boost::asio::write(tcp_socket, boost::asio::buffer(str));
if (writtenBytes > 0) {
// this line is executed successfully every time.
// writtenBytes == 13, which equals to str.length()
std::cout << "Bytes written successfully!\n";
}
using namespace std::chrono_literals;
std::this_thread::sleep_for(2000ms);
}
}
}
In this case writtenBytes > 0 is a sign of a successful write to the socket.
The echo server is based on istio/tcp-echo-server:1.2 image. I can ping it from my devcontainer by name or IP address with no issues. Also, when I write a similar code sample but using async functions (async_resolve, async_connect, except for the write operation, which is not async), and a separate thread to run ioservice, the server does see my data and responds appropriately.
Why the server doesn't see my data in case of no-async writes? Thanks in advance.
|
It turned out the issue was with the Docker container that received the message. The image istio/tcp-echo-server:1.2 doesn't write to its logs unless the data you send ends with \n.
|
73,560,455
| 73,560,819
|
How to handle a PostMessageThread message in std::thread?
|
Somewhere in my main thread I am calling PostThreadMessage(). But I don't know how to handle it in the std::thread I have sent it to.
I am trying to handle it in std::thread like this:
while(true) {
if(GetMessage(&msg, NULL, 0, 0)) {
// Doing appropriate stuff after receiving the message.
}
}
And I am sending the message from the main thread like this:
PostThreadMessage(thread.native_handle(), WM_CUSTOM_MESSAGE, 0, 0);
I don't know if I am supposed to receive the message as I did in my thread.
All I want to know is, how to check whether the "worker thread" is receiving the message I am sending it.
|
What std::thread::native_handle() returns is implementation-defined (per [thread.req.native] in the C++ standard). There is no guarantee that it even returns a thread ID that PostThreadMessage() wants.
For instance, MSVC's implementation of std::thread uses CreateThread() internally, where native_handle() returns a Win32 HANDLE. You would have to use GetThreadId() to get the thread ID from that handle.
Other implementations of std::thread might not use CreateThread() at all. For instance, they could use the pthreads library instead, where native_handle() would return a pthread_t handle, which is not compatible with Win32 APIs.
A safer way to approach this issue is to not use native_handle() at all. Call GetCurrentThreadId() inside of your thread function itself, saving the result to a variable that you can then use with PostThreadMessage() when needed, for example:
// needs <mutex>, <condition_variable> and <thread>
struct threadInfo
{
    DWORD id = 0;
    std::mutex mtx;
    std::condition_variable hasId;
};

void threadFunc(threadInfo &info)
{
    {
        std::lock_guard<std::mutex> lock(info.mtx);
        info.id = GetCurrentThreadId();
    }
    info.hasId.notify_one();

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0)) {
        // Doing appropriate stuff after receiving the message.
    }
}

...

threadInfo info;
std::thread t(threadFunc, std::ref(info));
{
    // std::condition_variable::wait() needs a lock and a predicate
    std::unique_lock<std::mutex> lock(info.mtx);
    info.hasId.wait(lock, [&info]{ return info.id != 0; });
}

...

PostThreadMessage(info.id, WM_CUSTOM_MESSAGE, 0, 0);
|
73,561,029
| 73,561,541
|
Passing std::move(obj) as an argument to a call to obj's own method
|
Context: I have a singly-chained list of Nodes, and calling PruneRecursively on its root should prune certain nodes from it, preserving the rest of the chain. Each node knows privately whether it should stay or go.
The implementation I came up with looks as follows:
unique_ptr<Node> Node::PruneRecursively(unique_ptr<Node> self)
{
if (m_ChildNode)
m_ChildNode = m_ChildNode->PruneRecursively(std::move(m_ChildNode));
if (ShouldRmThis())
return std::move(m_ChildNode);
else
return self;
}
if (m_RootNode)
m_RootNode = m_RootNode->PruneRecursively(std::move(m_RootNode));
Look at the very last line. It looks suspect to me, because m_RootNode is move-d from before a function is called on it. It works, but I think it could be bad if the method called reset() on the pointer, basically delete-ing this, before accessing a member.
Is it fine (if so, why), or is it a catastrophe waiting to happen?
EDIT: Node::PruneRecursively would look better if it was static, but I need virtual dispatching (Node is polymorphic).
|
Since C++17 it is guaranteed that the postfix expression in a function call is evaluated before any of the expressions in the argument list. That means operator-> on m_RootNode is guaranteed to be evaluated before the constructor of self is called. Therefore there is technically no problem.
Before C++17 there was no guarantee for this order and the constructor for self could be called before evaluation of the operator->. They were indeterminately sequenced. That would automatically cause undefined behavior because there would be a path in which operator-> is guaranteed to be called on an empty unique_ptr.
I would, however, advise against relying on these new C++17 guarantees. It doesn't cost you anything really to write out e.g. a reference variable to *m_RootNode first and then call the function on it.
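For instance, keeping the names from the question, that would look like:
Node& root = *m_RootNode; // bind the reference before anything is moved
m_RootNode = root.PruneRecursively(std::move(m_RootNode));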
Also, with regards to the overall design, not just the evaluation of that expression, see the comments under the question.
|
73,561,144
| 73,561,490
|
log4cplus ConsoleAppender does not work in container
|
I'm trying to containerize my application, but I'm having trouble getting log4cplus working. Bottom line up front: logging works when running on the host, but not in the container when I'm logging from long-running loops. It seems like a buffer somewhere is not getting flushed. A minimal example follows.
Additionally, removing the long-lived loop removes the issue, presumably because log4cplus flushes one last time before tearing down. Lengthening the sleep duration did not seem to help.
main.cpp
#include <iostream>
#include <unistd.h>
#include <log4cplus/logger.h>
#include <log4cplus/loggingmacros.h>
#include <log4cplus/configurator.h>
#include <log4cplus/initializer.h>
int main(int argc, char **argv)
{
std::cout << "Sanity Check" << std::endl;
auto log4cplus = ::log4cplus::Initializer();
std::string logconfig("log4cplus_configure.ini");
::log4cplus::PropertyConfigurator::doConfigure(logconfig);
auto logger = ::log4cplus::Logger::getInstance(LOG4CPLUS_TEXT("main"));
while (true) {
LOG4CPLUS_ERROR(logger, "Sleeping...");
// std::cout << "cout sleep..." << std::endl; // uncomment to get log messages working
sleep(1);
}
return 0;
}
log4cplus_configure.ini
log4cplus.rootLogger=INFO, MyConsoleAppender
log4cplus.appender.MyConsoleAppender=log4cplus::ConsoleAppender
log4cplus.appender.MyConsoleAppender.layout=log4cplus::PatternLayout
log4cplus.appender.MyConsoleAppender.layout.ConversionPattern=[%-5p][%d] %m%n
Dockerfile
FROM rockylinux:latest
RUN dnf install -y boost-system
COPY ./build/sandbox /
COPY ./log4cplus_configure.ini /
CMD ["/sandbox"]
CMakeLists.txt
cmake_minimum_required(VERSION 3.10)
set (CMAKE_CXX_STANDARD 17)
# Project executable and library
add_executable(sandbox main.cpp)
target_link_libraries(sandbox
PUBLIC liblog4cplus.a
PUBLIC pthread
PUBLIC boost_system
)
|
Not sure why, but adding log4cplus.appender.MyConsoleAppender.ImmediateFlush=true to log4cplus_configure.ini solved my issue.
|
73,561,571
| 73,561,689
|
How to capture a function in C++ lambda
|
I'm wondering how I can pass a function in the capture list. My code snippet is shown below. It fails to compile with error: capture of non-variable ‘bool isVowel(char)’.
Why does this similar posted solution with a function pointer work?
#include <iostream>
#include <algorithm>
#include <set>
#include <list>
using namespace std;
bool isVowel(char c){
string myVowels{"aeiouäöü"};
set<char> vowels(myVowels.begin(), myVowels.end());
return (vowels.find(c) != vowels.end());
}
int main(){
list<char> myCha{'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'};
int cha[]= {'A', 'B', 'C'};
// it doesn't work
auto iter = adjacent_find(myCha.begin(), myCha.end(),
[isVowel](char a, char b){ return isVowel(a) == isVowel(b);
});
if (iter != myCha.end()) cout << *iter;
}
|
Why does this similar snippet work?
Because in that snippet:
auto func(void(*func2)())
{
return [&func2](){cout<<"hello world 1"<<endl;func2();};
}
func2 is a local variable with type pointer to a function. In your case you are trying to capture a function itself which is not allowed:
The identifier in any capture without an initializer (other than the
this-capture) is looked up using usual unqualified name lookup in the
reaching scope of the lambda. The result of the lookup must be a
variable with automatic storage duration declared in the reaching
scope, or a structured binding whose corresponding variable satisfies
such requirements (since C++20). The entity is explicitly captured.
from here (emphasis is mine). A function name is definitely not a variable with automatic storage duration.
It is not clear why you are trying to capture this function, as it would work without any capture, and capturing it does not make any sense (unlike the question you point to, where the OP wanted to use a different function in different cases, hence capturing the pointer).
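For example, the call from the question compiles with an empty capture list, because a free function can be called from the lambda body without being captured:
auto iter = adjacent_find(myCha.begin(), myCha.end(),
    [](char a, char b){ return isVowel(a) == isVowel(b); });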
PS: according to your comment, you did not understand the main reason for the error here. Consider these two snippets:
int i = 0;
int main() {
auto f = [i]() {}; // illegal, should fail to compile
}
vs this:
int main() {
int i = 0;
auto f = [i]() {}; // should compile fine
}
First snippet is incorrect (though gcc may compile it with warning), second one is just fine.
|
73,561,670
| 73,561,737
|
Best practice: [[maybe_unused]] or anonymous argument?
|
Let's say I have a class with a virtual function with one argument and two different implementations of this virtual function. The first implementation uses the argument while the second does not. The second case will produce a compilation warning. There are two ways I can think of to suppress the warning.
Using an anonymous parameter.
Using the [[maybe_unused]] annotation.
What is considered "best practice" between the two?
Thank you in advance for your answers.
|
The most authoritative source we have for "best practice" in C++ is the C++ Core Guidelines. And on the topic of unused arguments, they have this to say
F.9: Unused parameters should be unnamed
...
If parameters are conditionally unused, declare them with the [[maybe_unused]] attribute.
So the C++ Core Guidelines recommend leaving the parameter unnamed if it's never used at all. [[maybe_unused]], by this rule, should only be applied if the parameter is unused in some situations, such as inside an #ifdef or (more modern) inside an if constexpr.
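A small sketch illustrating both recommendations (the class and member names here are made up for the example):
#include <cstdio>

struct Base {
    virtual ~Base() = default;
    virtual void report(int value) = 0;
};

// Uses the argument: name it normally.
struct Verbose : Base {
    void report(int value) override { std::printf("value = %d\n", value); }
};

// Never uses the argument: leave it unnamed (F.9).
struct Silent : Base {
    void report(int /*value*/) override {}
};

// Uses it only conditionally: mark it [[maybe_unused]].
struct Conditional : Base {
    void report([[maybe_unused]] int value) override {
#ifndef NDEBUG
        std::printf("debug value = %d\n", value);
#endif
    }
};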
|
73,561,731
| 73,561,776
|
How to check if a number is in a vector, and if not, then to return -1?
|
This is pretty easy if I can import <algorithm> or any other things at the top, but I just started using C++ and my professor asked us to use only loops and conditionals for this problem.
The problem itself is to give a vector v and an int seek to a function. A loop runs through the vector to find the index of the first location of the number. If it can be found, then return the index of where the number is, if not then return -1.
I got the first part down, but now I am having an issue with the second part.
#include <iostream>
#include "vectorutils.h"
#include <vector>
using namespace std;
//endl = new line
int main() {
cout << find({1, 2, 3, 4, 5, 6, 7, 8}, 28) << endl;
}
int find(const std::vector<int>& v, int seek) {
int ind = 1;
for (int i = 0; i < v.size(); i++) {
if (seek == v[i]) {
return i;
}
else {
return -1;
}
}
|
You have the return -1; statement in the wrong place. It needs to be after the loop exits, not inside the loop. For example:
int find(const std::vector<int>& v, int seek) {
for (int i = 0; i < v.size(); i++) {
if (seek == v[i]) {
return i;
}
// else {
// return -1;
// }
}
return -1; // <-- moved down here!
}
In your original code (if it could compile, which it doesn't), the logic is wrong. If the vector is empty then you return nothing at all (which is undefined behavior), and if the 1st element does not match the number then you return -1 immediately without checking the remaining elements at all.
The above change fixes both of those issues.
|
73,563,545
| 73,563,628
|
Number of class template instantiations
|
I was wondering whether you should worry about the number of class template instantiations and their effect on compile times. In the below example, I imagined that the Foo template would only be instantiated only once, while the Bar template would be instantiated twice. Do such differences matter to STL writers?
template<unsigned SIZE>
struct Foo
{
const unsigned size_;
Foo() : size_(SIZE) {}
};
Foo<1> foo1;
Foo<2> foo2;
template<unsigned SIZE>
struct Bar
{
static constexpr unsigned size_{SIZE};
};
Bar<1> bar1;
Bar<2> bar2;
I'm feeling like I'm wrong about the Foo template being instantiated only once. If so could you tell me whether
Bar<1> bar1;
Bar<1> bar2
...
Bar<1> bar10000;
would instantiate the Bar template once or 10,000 times.
Class templates instantiate types and types instantiate objects; does reducing the number of instantiated types matter if the number of objects instantiated remains the same?
Are such things even relevant to compile times (I doubt they would be for my purposes)?
|
So the simple answer is that the generation of template code (and subsequent compilation) will occur for each type you are creating. So in this example:
Bar<1> bar1;
Bar<1> bar2
...
Bar<1> bar10000;
Each of these is of type Bar<1>, so the code for Bar<1> only needs to be generated once. You can see this in the C++ Insights tool.
This will change, however, if you use this across your code base. Remember that each translation unit (roughly analogous to a cpp file) that includes this header will re-instantiate the types Bar and Foo for every number they are used with. If this is the case, you may find your code being generated and compiled a lot. For this simple template that is not a concern, but for much more complex templates that may depend on other templates, this can quickly add up.
|
73,563,894
| 73,579,169
|
Run virtual environment using VMX in C++
|
I was curious if it's possible to start a virtual environment that can do something like this:
int main() {
std::cout << "This part is being ran on the host";
StartVM();
std::cout << "This part is being ran on a VM";
EndVM();
return 0;
}
I've read some documentation on Intel VT's VMX operations, but it's a level too far above my understanding of the intricacies of x86 hardware and x86 assembly.
I've tried to run the code in the OSDev wiki, but I'm having trouble manually enabling the corresponding bits described in the article using inline assembly to make the vmxon instruction work.
If there's a library or an API in C++ (or even in C) that does this, please let me know. Also, if I'm deeply misunderstanding something about VMX or virtualisation in general, then please point me to the right direction.
|
A research paper called dune implements what you describe.
Paper: https://www.usenix.org/conference/osdi12/technical-sessions/presentation/belay
Open Source Code: https://github.com/project-dune/dune
It achieves this with a kernel module that handles VM entries/exits and syscall forwarding. It also applies a bunch of (clever!) tricks that make the virtual memory space identical inside and outside the VM, so the program can continue to execute after entering the VM.
Google's gVisor (https://gvisor.dev/) project may also help, and developers of gVisor are seeking to add the ability to transparently forward syscalls from the VM in the kernel (https://lwn.net/Articles/902585/).
|
73,564,242
| 73,564,835
|
Shall I try to use const T & as much as possible?
|
I am reading the book <Effective C++>, and I think using a const reference is good practice because it can avoid unnecessary copies.
So, even when initializing an object, I use const T & t = T();
here is the code:
#include <string>
#include <iostream>
#include <vector>
using namespace std;
template <class T>
inline std::vector<std::string> Split(const std::string &str, const T &delim, const bool trim_empty = false) {
if (str.empty()) return {};
size_t pos, last_pos = 0, len;
std::vector<std::string> tokens;
std::string delim_s = "";
delim_s += delim;
while(true) {
pos = str.find(delim_s, last_pos);
if (pos == std::string::npos) pos = str.size();
len = pos-last_pos;
if ( !trim_empty || len != 0) tokens.push_back(str.substr(last_pos, len));
if (pos == str.size()) break;
else last_pos = pos + delim_s.size();
}
return tokens;
}
int main() {
const std::string& str = "myname@is@haha@"; // compare with std::string = "", will this be better(in speed and memory)?
const std::string& s = Split(str, "@").front(); // this crashed
// std::string s = Split(str, "@").front(); // this ok
cout << "result is " << s << endl;
}
As you can see above, this code is used to split a string into a vector<std::string>.
In the main function:
I have two methods to get the first element of the split results.
1.const std::string& s = Split(str, "@").front();
2.std::string s = Split(str, "@").front();
In my test, option 1 crashed and option 2 is OK.
Could you explain the difference between them?
Also, is it necessary to do this (const std::string& s = "asd";)?
Compared to std::string s = "asd", what are the advantages and disadvantages?
|
const std::string& str = "myname@is@haha@";
This works due to reference lifetime extension. When a prvalue is bound immediately to a reference-to-const or an rvalue reference its lifetime gets extended to the lifetime of the reference. Since "myname@is@haha@" is not a std::string, a temporary std::string gets constructed to hold that value. That temporary is a prvalue, and so is eligible for lifetime extension.
const std::string& s = Split(str, "@").front();
This doesn't work because std::vector::front doesn't return a prvalue, it returns an lvalue. Reference lifetime extension does not apply in this case. The object referenced by the reference returned by front is part of the std::vector returned by Split, and dies along with it at the end of the expression leaving s dangling.
This part is wandering a bit into opinion, but in general if you want an object, just declare that you have an object. There are far fewer pitfalls, and given modern compiler optimizations there is no performance benefit to relying on reference lifetime extension. Anywhere you may have saved a copy by relying on reference lifetime extension, copy elision (guaranteed in C++17 and above, widely applied by compilers prior to that) will save the copy for you.
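If the goal was merely to avoid copying the front element, the safe pattern is to keep the vector returned by Split alive for as long as the reference is used:
auto tokens = Split(str, "@");          // the vector lives until the end of the scope
const std::string& s = tokens.front();  // this reference stays valid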
|
73,564,321
| 73,564,382
|
Resetting unique_ptr to an array of characters
|
I'm porting an old C++ program to modern C++. The legacy program uses new and delete for dynamic memory allocation. I replaced new with std::unique_ptr, but I'm getting a compilation error when I try to reset the unique_ptr.
Here is the stripped-down version of the program. My aim is to get rid of all the naked new.
#include <memory>
enum class Types {
ONE,
TWO,
};
// based on type get buffer length
int get_buffer_len(Types type) {
if(type == Types::ONE) return 10;
else if(type == Types::TWO) return 20;
else return 0;
}
int main() {
Types type = Types::ONE;
std::unique_ptr<char[]> msg{};
auto len = get_buffer_len(type);
if(len > 0) {
msg.reset(std::make_unique<char[]>(len));
}
// based on type get the actual message
if(type == Types::ONE) {
get_message(msg.get());
}
}
I get the following compilation error:
error: no matching function for call to 'std::unique_ptr<char []>::reset(std::__detail::__unique_ptr_array_t<char []>)'
| msg.reset(std::make_unique<char[]>(len));
| ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
If you look at the reset function, it takes a raw pointer to memory, not another unique_ptr:
// members of the specialization unique_ptr<T[]>
template< class U >
void reset( U ptr ) noexcept;
This function is designed to allow you to reset a unique pointer and simultaneously capture memory that you intend to manage with said unique_ptr. What you are looking to do is assign an r-value unique_ptr to an existing unique_ptr (msg), for which C++ also has an answer:
unique_ptr& operator=( unique_ptr&& r ) noexcept;
Move assignment operator. Transfers ownership from r to *this as if by calling reset(r.release()) followed by an assignment of get_deleter() from std::forward<Deleter>(r.get_deleter()).
So you can instead just do:
msg = std::make_unique<char[]>(len);
|
73,564,414
| 73,564,480
|
Time complexity in c++. loops
|
I have been learning DSA, but calculating time complexity is a little difficult for me.
I understand the logic of O(n^2) or O(n),
but what will be the time complexity of the following, and how is it derived?
while(n%2 == 0) {
n/=2;
}
Will it be O(n/2)? I'm not sure.
|
You can test it practically to see how many iterations the loop will perform depending on n:
int main()
{
int n = 2048;
int count = 0;
while(n%2 == 0) {
n/=2;
++count;
}
std::cout << count;
return 0;
}
So, when n = 1024, count = 10; when n = 2048, count = 11.
In the worst case (n a power of two), 2^(iteration count) = N,
so the complexity would be O(log2(N)).
|
73,564,472
| 73,571,384
|
how to cast one type to another while preserving value category and cv-qualifiers?
|
As the title says. C++ already has std::forward, but it can't cast the original type to another. I wonder if there is a function to cast one type to another while preserving the value category and cv-qualifiers of the original type.
Supposing there is such a function called perfect_cast to do the magic, the output of the following code
#include <iostream>
#include <type_traits>
#include <format>
template <typename To, typename From>
decltype(auto) perfect_cast(From&& s)
{
// ???
}
template <std::size_t I>
struct A
{
void f()&
{
std::cout << std::format("lvalue A{:d}", I) << std::endl;
}
void f() const&
{
std::cout << std::format("const lvalue A{:d}", I) << std::endl;
}
void f()&&
{
std::cout << std::format("rvalue A{:d}", I) << std::endl;
}
void f() const&&
{
std::cout << std::format("const rvalue A{:d}", I) << std::endl;
}
};
template <std::size_t ...Is>
struct B : A<Is>... {};
auto g1()
{
return B<0, 1, 2, 3>{};
}
const auto g2()
{
return B<0, 1, 2, 3>{};
}
int main()
{
B<0, 1, 2, 3> b;
auto& b1 = b;
perfect_cast<A<0>>(b1).f();
const auto& b2 = b1;
perfect_cast<A<1>>(b2).f();
perfect_cast<A<2>>(g1()).f();
perfect_cast<A<3>>(g2()).f();
}
should be
lvalue A0
const lvalue A1
rvalue A2
const rvalue A3
|
There aren't a lot of use cases for a combined conversion-forward. Arguably, most casts result in prvalues, and for derived-to-base casts, the language already permits treating the derived as base, preserving value category.
For example, the OP main would be more clearly written as:
B<0, 1, 2, 3> b;
auto& b1 = b;
b1.A<0>::f();
const auto& b2 = b1;
b2.A<1>::f();
g1().A<2>::f();
g2().A<3>::f();
See https://godbolt.org/z/Kff9zKPha
Perhaps it is useful to know there is a seemingly related problem of forwarding an object x in the manner of some other object y. This is useful (say, if x is "part" of y). That is useful enough that it was standardized as std::forward_like. (It is more complex than it should have to be, because of some past poor language design decisions.)
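That said, if a perfect_cast is really wanted for cases like the one in the question (reference-compatible conversions such as derived-to-base), a minimal sketch that propagates const and value category, ignoring volatile, could look like this:
#include <type_traits>
#include <utility>

template <typename To, typename From>
decltype(auto) perfect_cast(From&& s)
{
    // Propagate const from the source type...
    using Plain  = std::remove_reference_t<From>;
    using Target = std::conditional_t<std::is_const_v<Plain>, const To, To>;

    // ...and keep the value category of the deduced reference.
    if constexpr (std::is_lvalue_reference_v<From>)
        return static_cast<Target&>(s);
    else
        return static_cast<Target&&>(std::move(s));
}
Plugged into the question's main, this should print the four expected lines.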
|
73,565,363
| 73,572,122
|
Getting tbb linker errors when including the <execution> header
|
I have been writing a visualisation tool using OpenGL. It has been compiling and linking just fine (using gcc 11.2.0) until I recently installed the Linux dependencies for OpenVDB (listed under Using UNIX apt-get). I am now getting linker errors that I have narrowed down to the inclusion of the <execution> header file:
/usr/bin/ld: CMakeFiles/test.dir/main.cpp.o: in function `tbb::detail::d1::execution_slot(tbb::detail::d1::execution_data const&)':
main.cpp:(.text._ZN3tbb6detail2d114execution_slotERKNS1_14execution_dataE[_ZN3tbb6detail2d114execution_slotERKNS1_14execution_dataE]+0x18): undefined reference to `tbb::detail::r1::execution_slot(tbb::detail::d1::execution_data const*)'
/usr/bin/ld: CMakeFiles/test.dir/main.cpp.o: in function `tbb::detail::d1::current_thread_index()':
main.cpp:(.text._ZN3tbb6detail2d120current_thread_indexEv[_ZN3tbb6detail2d120current_thread_indexEv]+0x12): undefined reference to `tbb::detail::r1::execution_slot(tbb::detail::d1::execution_data const*)'
The above is from the following minimal example.
main.cpp:
#include <execution>
#include <iostream>
int main()
{
std::cout << "Hello, World!" << std::endl;
return 0;
}
CMakeLists.txt:
cmake_minimum_required(VERSION 3.22)
project(test)
set(CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_STANDARD 20)
add_executable(test main.cpp)
I have tried linking TBB with my project like so:
# Find and link TBB.
find_package(TBB REQUIRED)
include_directories(${TBB_INCLUDE_DIRS})
link_directories(${TBB_LIBRARY_DIRS})
add_definitions(${TBB_DEFINITIONS})
if(NOT TBB_FOUND)
message("Error: TBB not found")
endif(NOT TBB_FOUND)
add_executable(test main.cpp)
target_link_libraries(test ${TBB_LIBRARIES})
...and also adding the -ltbb linker flag (as per the answer to this post) using
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -ltbb")
However, this does not solve the issue.
What I find particularly strange is that I didn't use to have to link against tbb despite having included the <excution> header all along. Only after installing the OpenVDB dependencies has this become an issue.
Can anyone advise me on how to solve this (either by appropriately linking tbb, or not having to link it at all, as was the case before)? Any form of insight would be much appreciated.
|
I have managed to successfully link tbb as follows:
find_package(TBB REQUIRED COMPONENTS tbb)
add_executable(test main.cpp)
target_link_libraries(test tbb)
Although I am still clueless as to why I suddenly needed to link with tbb, considering I previously didn't need to.
|
73,565,441
| 73,565,787
|
C++ | Input the following string inside a rectangle
|
I would like to input the following code inside a rectangle, so that the output would be the rectangle with my printed text.
Here is my code:
#include <iostream>
using namespace std;
int main() {
cout <<" * Programming Assignment *" << endl;
cout <<" * Data Structures *" << endl;
cout <<" * Author: Hello World * " << endl;
cout <<" * Due Date: September, 7th * " << endl;
return 0;
}
Here is the text OUTPUT I want to be in the Rectangle:
* Programming Assignment *
* Data Structures *
* Author: Hello World *
* Due Date: September, 7th *
Do you have any tips on how to do this? I've already checked the tutorial below, but that did not help clarify my question:
Print out a text in the middle of a rectangle
|
You can use the <iomanip> header, which provides manipulators you can pass into cout to control the output: std::setw(width) specifies that the next thing you print (the text in our case) will be padded up to width characters, std::setfill(' ') specifies which character to pad with, and std::left aligns the text to the left so the padding goes on the right.
#include <iostream>
#include <iomanip>
using namespace std;
void print_box_border(int width, bool is_top) {
if (is_top) { cout << "┌"; }
else { cout << "└"; }
for (int i = 0; i < width; ++i) {
cout << "─";
}
if (is_top) { cout << "┐\n"; }
else { cout << "┘\n"; }
}
void print_box_row(int width, const char * text) {
cout << "│" << std::setfill(' ') << std::setw(width) << std::left << text << "│\n";
}
int main()
{
int box_width = 50;
print_box_border(box_width, /*is_top*/ true);
print_box_row(box_width, " * Programming Assignment *");
print_box_row(box_width, " * Data Structures *");
print_box_row(box_width, " * Author: Hello World * ");
print_box_row(box_width, " * Due Date: September, 7th * ");
print_box_border(box_width, /*is_top*/ false);
return 0;
}
Output I get:
┌──────────────────────────────────────────────────┐
│ * Programming Assignment * │
│ * Data Structures * │
│ * Author: Hello World * │
│ * Due Date: September, 7th * │
└──────────────────────────────────────────────────┘
|
73,565,975
| 73,566,036
|
no known conversion from 'std::shared_ptr<int>' to 'int *' for 1st argument
|
When working with smart pointers, something confused me.
Here is the error message:
no known conversion from 'std::shared_ptr<int>' to 'int *' for 1st argument
and here's the code I ran
#include <memory>
#include <vector>
class test {
public:
int x_;
test(int *x) {}
};
int main(int argc, char *argv[]) {
auto x = std::make_shared<int>(5);
std::shared_ptr<test> t = std::make_shared<test>(x);
}
I think this error came from the different types of pointers.
And compilation can succeed when changing
std::shared_ptr<test> t = std::make_shared<test>(x);
into
std::shared_ptr<test> t = std::make_shared<test>(&(*x));
But this &(*) operation looks weird to me, and I'm not sure it is common usage.
Are there any suggestions for this?
Thanks.
|
Use x.get().
It's not implicitly convertible because that would make it too easy to make mistakes, like creating another shared_ptr from the same raw pointer without proper sharing.
|
73,566,127
| 73,569,488
|
How do applications determine if instruction set is available and use it in case it is?
|
I'm just curious how this works in games and other software.
More precisely, I'm asking for a solution in C++.
Something like:
if AMX available -> Use AMX version of the math library
else if AVX-512 available -> Use AVX-512 version of the math library
else if AVX-256 available -> Use AVX-256 version of the math library
etc.
The basic idea I have is to compile the library into different DLLs and swap them at runtime, but that doesn't seem like the best solution to me.
|
For the detection part
See Are the xgetbv and CPUID checks sufficient to guarantee AVX2 support? which shows how to detect CPU and OS support for new extensions: cpuid and xgetbv, respectively.
ISA extensions that add new/wider registers that need to be saved/restored on context switch also need to be supported and enabled by the OS, not just the CPU. New instructions like AVX-512 will still fault on a CPU that supports them if the OS hasn't set a control-register bit. (Effectively promising that it knows about them and will save/restore them.) Intel designed things so the failure mode is faulting, not silent corruption of registers on CPU migration, or context switch between two programs using the extension.
Extensions that added new or wider registers are AVX, AVX-512F, and AMX. OSes need to know about them. (AMX is very new, and adds a large amount of state: 8 tile registers T0-T7 of 1KiB each. Apparently OSes need to know about AMX for power-management to work properly.)
OSes don't need to know about AVX2/FMA3 (still YMM0-15), or any of the various AVX-512 extensions which still use k0-k7 and ZMM0-31.
There's no OS-independent way to detect OS support of SSE, but fortunately it's old enough that these days you don't have to. It and SSE2 are baseline for x86-64. Everything up to SSE4.2 uses the same register state (XMM0-15) so OS support for SSE1 is sufficient for user-space to use SSE4.2. SSE1 was new in 1999, with Pentium 3.
Different compilers have different ways of doing CPUID and xgetbv detection. See does gcc's __builtin_cpu_supports check for OS support? - unfortunately no, only CPUID, at least when that was asked. I'd consider that a GCC bug, but IDK if it ever got reported or fixed.
For the optional-use part
Typically setting function pointers to selected versions of some important functions. Inlining through function pointers isn't generally possible, so make sure you choose the boundaries appropriately, like an AVX-512 version of a function that includes a loop, not just a single vector.
GCC's function multi-versioning can automate that for you, transparently compiling multiple versions and hooking some function-pointer setup.
There have been some previous Q&As about this with different compilers, search for "CPU dispatch avx" or something like that, along with other search terms.
See The Effect of Architecture When Using SSE / AVX Intrinisics to understand the difference between GCC/clang's model for intrinsics where you have to enable -march=skylake or whatever, or manually -mavx2, before you can use an intrinsic. vs. MSVC and classic ICC where you could use any intrinsic anywhere, even to emit instructions the compiler wouldn't be able to auto-vectorize with. (Those compilers can't or don't optimize intrinsics much at all, perhaps because that could lead to them getting hoisted out of if(cpu) statements.)
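As an illustration of the function-pointer dispatch idea, here is a minimal sketch using GCC/Clang's __builtin_cpu_supports; the work_* functions are stand-ins, and keep in mind the caveat above that this builtin checks CPUID only, not OS support:
#include <cstdio>

// Stand-ins for the real kernels; in practice each would be compiled
// with matching target options (e.g. __attribute__((target("avx2")))).
static void work_avx512() { std::puts("AVX-512 path"); }
static void work_avx2()   { std::puts("AVX2 path"); }
static void work_scalar() { std::puts("scalar path"); }

using work_fn = void (*)();

static work_fn select_work()
{
    __builtin_cpu_init();  // harmless; required only if this runs before constructors
    if (__builtin_cpu_supports("avx512f")) return work_avx512;
    if (__builtin_cpu_supports("avx2"))    return work_avx2;
    return work_scalar;
}

int main()
{
    work_fn work = select_work(); // resolve once
    work();                       // then always call through the pointer
}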
|
73,566,210
| 73,566,251
|
Move semantics and std::move
|
I have a general question about move semantics. Yesterday I just played around to get more comfortable with this topic. Here I added copy and move constructors that simply log to the console:
#include <iostream>
class Test {
public:
const char* value = "HI";
Test(){
std::cout << "Default Constructed\n";
}
Test(Test &&source) {
std::cout << "Move Constructed\n";
}
Test(const Test &source) {
std::cout << "Copy Constructed\n";
}
};
Now when I call
void showMe(Test&& test) {
std::cout << test.value;
}
int main() {
Test test1;
// Test test2{test1};
showMe(std::move(test1));
return 0;
}
The console logs out:
> Default Constructed
> HI
Which is clear to me, since we are just moving ownership to showMe. But why wasn't Test's move constructor called?
But now, if I change showMe function into
void showMe(Test test) {
std::cout << test.value;
}
the console shows:
> Default Constructed
> Move Constructed
> HI
So my question is: why was the move constructor not executed in the first case, but it was in the second?
I thought that in the first case Test's move constructor should execute.
What is the difference between the two cases?
|
But why wasn't Test's move constructor called?
Because you keep operating on the same object via a reference to an rvalue. No new object is being constructed, hence no constructor is necessary.
You could conceivably keep passing that reference down, just like you can do with a regular reference or a reference to const - the very nature of references is that they're not objects.
In your second example, the parameter to the function is a value, and in order to construct that value, the second constructor needs to fire.
|
73,566,320
| 73,567,901
|
boost beast sync app - Is explicit concurrency handling needed?
|
Consider the official boost beast websocket server sync example
Specifically, this part:
for(;;)
{
// This buffer will hold the incoming message
beast::flat_buffer buffer;
// Read a message
ws.read(buffer);
// Echo the message back
ws.text(ws.got_text());
ws.write(buffer.data());
}
To simplify the scenario, let's assume it only ever writes, and the data being written is different each time.
for(;;)
{
// assume some data has been prepared elsewhere, in str
mywrite(str);
}
...
void mywrite(char* str)
{
net::const_buffer b(str, strlen(str));
ws.write(b);
}
This should be fine, as all calls to mywrite happen sequentially.
What if we had multiple threads and the same for loop? i.e. what if we had concurrent calls to mywrite, and to ws.write by extension? Would something like a strand or a mutex be needed?
In other words, do we need to explicitly handle concurrency when calling ws.write from multiple threads?
I've not yet understood the docs, as they mention:
Thread Safety
Distinct objects: Safe.
Shared objects: Unsafe. The application must also ensure that all asynchronous operations are performed within the same implicit or explicit strand.
And then
Alternatively, for a single-threaded or synchronous application you may write:
websocket::stream<tcp_stream> ws(ioc);
This seems to imply the ws object is not thread-safe, but also that, for the specific case of a sync app, no explicit strand is constructed, implying it's OK?
I was not able to work it out by reading the example, or the websocket implementation. I've never worked with asio before.
I tried to test it as follows; it didn't seem to fail on my laptop, but I have no guarantee this will work in other cases. I'm also not sure it's a valid test for the case I've described.
auto lt = [&](unsigned long long i)
{
char s[1000] = {0};
for(;;++i)
{
sprintf(s, "Hello from thread:%llu", i);
mywrite(s,30);
}
};
std::thread(lt, 10000000u).detach();
std::thread(lt, 20000000u).detach();
std::thread(lt, 30000000u).detach();
// ws client init, as the official example
for (int i = 0; i < 100; ++i)
{
beast::flat_buffer buffer;
// Read a message into our buffer
ws.read(buffer);
// The make_printable() function helps print a ConstBufferSequence
std::cout << beast::make_printable(buffer.data()) << std::endl;
}
|
Yes, you need synchronization because you access the object from multiple threads.
The documentation you quoted is very clear on that:
Shared objects: Unsafe [...]
On your rationale for being confused:
This seems to imply ws object is not thread-safe, but also for the specific case of a sync app, there's no explicit strand being constructed, implying it's OK?
It's okay because it's single-threaded, not because it's synchronous. In fact, even when single-threaded you still need a strand to prevent overlapped asynchronous write operations. That's what the second part hints at:
[...] The application must also ensure that all asynchronous operations are performed within the same implicit or explicit strand.
Now, the missing piece that might solve the puzzle for you is that the example has an implicit logical strand (a sequential chain of non-overlapping asynchronous operations). See also Why do I need strand per connection when using boost::asio?
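For the synchronous, multi-threaded scenario described, one hedged option is simply to serialize every call that writes to the shared stream with a mutex; the wrapper below is only an illustrative sketch, not part of the Beast example:
#include <mutex>
#include <string>
#include <boost/asio/buffer.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/beast/websocket.hpp>
namespace net = boost::asio;
namespace websocket = boost::beast::websocket;
using tcp = net::ip::tcp;
// All threads funnel through write_locked(), so no two threads ever call
// ws.write() on the shared stream at the same time.
struct SafeWriter {
    websocket::stream<tcp::socket>& ws;
    std::mutex m;
    void write_locked(const std::string& msg) {
        std::lock_guard<std::mutex> lock(m);   // one writer at a time
        ws.write(net::buffer(msg));            // synchronous write under the lock
    }
};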
|
73,566,585
| 73,569,548
|
How to programmatically get the name of the current printer through the libcups in Linux?
|
I'm trying to get the name of the current printer using the libcups library in Linux, but I can't find such a method. I found only how to get a complete list of printers, but how to find out which one will print is not clear.
#include <cups/cups.h>
QStringList getPrinters()
{
QStringList printerNames;
cups_dest_t *dests;
int num_dests = cupsGetDests(&dests);
for (int pr = 0; pr < num_dests; ++pr) {
QString printerName = QString::fromUtf8(dests[pr].name);
printerNames.append(printerName);
}
cupsFreeDests(num_dests, dests);
return printerNames;
}
|
Once you have a valid destination (cups_dest_t), you can retrieve its information via cupsGetOption.
Example (from https://openprinting.github.io/cups/doc/cupspm.html#basic-destination-information):
const char *model = cupsGetOption("printer-make-and-model",
dest->num_options,
dest->options);
To find the default printer one can use:
cupsGetDest (param name: NULL for the default destination)
cupsGetDests2 (param http: Connection to server or CUPS_HTTP_DEFAULT)
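For example, a minimal hedged sketch of asking for the default destination (mirroring the question's Qt usage; error handling omitted):
#include <cups/cups.h>
#include <QString>
// Returns the default printer's name, or an empty string if none is configured.
QString getDefaultPrinterName()
{
    cups_dest_t *dests = nullptr;
    int num_dests = cupsGetDests(&dests);
    // Passing NULL as the name asks for the default destination.
    cups_dest_t *def = cupsGetDest(nullptr, nullptr, num_dests, dests);
    QString name = def ? QString::fromUtf8(def->name) : QString();
    cupsFreeDests(num_dests, dests);
    return name;
}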
Another suggestion would be:
https://openprinting.github.io/cups/doc/cupspm.html#finding-available-destinations
Last but not least:
CUPS Programming Manual
Sidenote:
Since you're using Qt, doesn't Qt have printer support?
E.g.
QPrinter::QPrinter(const QPrinterInfo &printer, QPrinter::PrinterMode mode = ScreenResolution);
(see https://doc.qt.io/qt-6/qprinter.html#QPrinter-1)
and
bool QPrinterInfo::isDefault() const;
(see https://doc.qt.io/qt-6/qprinterinfo.html#isDefault)
|
73,566,710
| 73,566,866
|
How to instantiate a pushbutton from ui into the code in QT framework
|
What I intend to do is just a basic animation in my Qt application: when the button is clicked, I want it to move from one point to another over a time interval. Here is my code (main.cpp and mainWindow.h are unchanged from the defaults):
MainWindow.cpp
#include "mainwindow.h"
#include "ui_mainwindow.h"
#include <QPropertyAnimation>
MainWindow::MainWindow(QWidget *parent)
: QMainWindow(parent)
, ui(new Ui::MainWindow)
{
ui->setupUi(this);
}
MainWindow::~MainWindow()
{
delete ui;
}
void MainWindow::on_pButton_clicked()
{
QPushButton* pb = ui->pButton;
QPropertyAnimation anim(pb, "geometry");
anim.setStartValue(QRect(100,100,100,30));
anim.setEndValue(QRect(900,900,100,30));
anim.setDuration(10000);
anim.start();
}
The build works with no errors. The problem is that when I click the button, its geometry just jumps to the start value and stays there instead of animating to the end point. Why does it remain stationary?
And, lastly, my other question: how can I get a reference to a widget that I created in the UI designer? I know one way, which I used in my code: getting the button through ui->pButton. But that way I have to hold the QPushButton as a pointer. Is there any way to reference, in code, a button that was added with the UI designer (rather than created in code)?
|
You are declaring the anim object as a local variable. As soon as on_pButton_clicked() returns, anim gets destroyed. Create the animation on the heap, as shown below, and your code should work fine.
void MainWindow::on_pButton_clicked()
{
QPropertyAnimation *anim = new QPropertyAnimation(ui->pButton, "geometry");
anim->setStartValue(QRect(100,100,100,30));
anim->setEndValue(QRect(900,900,100,30));
anim->setDuration(10000);
anim->start(QAbstractAnimation::DeleteWhenStopped);
}
|
73,568,020
| 73,607,202
|
C++ - Declare variable with a variable template list without "auto"
|
I am using C++17 and I need to declare some variables that have the following type structure:
ctre::regex_results<const char*,
ctre::captured_content<1, void>,
ctre::captured_content<2, void> > mts;
ctre::regex_results<const char*,
ctre::captured_content<1, void>,
ctre::captured_content<2, void>,
ctre::captured_content<3, void>,
ctre::captured_content<4, void>,
ctre::captured_content<5, void>,
ctre::captured_content<6, void>,
ctre::captured_content<7, void>,
ctre::captured_content<8, void> > mts;
The number of ctre::captured_content in the template list is determined at compile-time.
I was wondering if there exists a shorter way to declare them WITHOUT using auto. Also, I cannot use a template here, due to this error: a template declaration cannot appear at block scope
The type I need is determined by this function:
template <CTRE_REGEX_INPUT_TYPE input, typename... Modifiers>
static constexpr inline auto search =
regular_expression<typename regex_builder<input>::type,
search_method,
ctll::list<singleline, Modifiers...>>();
UPDATE:
A simple example:
#include <bits/stdc++.h>
#include "ctre.hpp" // single-header of https://github.com/hanickadot/compile-time-regular-expressions
using namespace std;
#define LINE_RGX "((?:a+).(?:b+).(?:c+).(?:d+)\\s*)+"
#define RGX1 "(a+)\\.(b+)\\.(c+)\\.(d+)\\s*"
#define RGX2 "(a+\\.b+)\\.(c+\\.d+)\\s*"
static constexpr auto ctre_line_rgx = ctll::fixed_string{ LINE_RGX };
static constexpr auto ctre_rgx1 = ctll::fixed_string{ RGX1 };
static constexpr auto ctre_rgx2 = ctll::fixed_string{ RGX2 };
constexpr auto match_line(std::string_view sv) noexcept {
return ctre::match<ctre_line_rgx>(sv);
}
constexpr auto search_rgx1(std::string_view sv) noexcept {
return ctre::search<ctre_rgx1>(sv);
}
constexpr auto search_rgx2(std::string_view sv) noexcept {
return ctre::search<ctre_rgx2>(sv);
}
int main() {
string s = "aaa.b.ccccc.dddd a.b.c.d aaaa.bb.c.ddddd";
ctre::regex_results mlp = match_line(s);
ctre::regex_results<const char*, ctre::captured_content<1, void>, ctre::captured_content<2, void>, ctre::captured_content<3, void>, ctre::captured_content<4, void> > mts1;
ctre::regex_results<const char*, ctre::captured_content<1, void>, ctre::captured_content<2, void> > mts2;
string_view _tmp = mlp.get<1>().to_view();
cout << "parsing fields: " << _tmp << endl;
while ( _tmp.size() > 0 && (mts1 = search_rgx1(_tmp)).size() > 0) {
cout << mts1.get<1>().to_view() << " -> " << mts1.get<2>().to_view() << " -> " << mts1.get<3>().to_view() << " -> " << mts1.get<4>().to_view() << endl;
if (mts1.size() >= _tmp.size())
break;
_tmp = _tmp.substr(mts1.size());
}
_tmp = mlp.get<1>().to_view();
cout << "parsing fields: " << _tmp << endl;
while ( _tmp.size() > 0 && (mts2 = search_rgx2(_tmp)).size() > 0) {
cout << mts2.get<1>().to_view() << " -> " << mts2.get<2>().to_view() << endl;
if (mts2.size() >= _tmp.size())
break;
_tmp = _tmp.substr(mts2.size());
}
}
The types of mts1 and mts2 are determined by the regex provided. Is it possible to avoid spelling out all the ctre::captured_content<N, void> with an abbreviation or some C++ trick?
|
It seems you want a typedef to shorten ctre::regex_results<const char*, ctre::captured_content<1, void>, ..., ctre::captured_content<N, void>>
You might use decltype:
decltype(search_rgx1("")) mts1;
decltype(search_rgx2("")) mts2;
Alternatively, you might use (global scope):
template <typename Seq> struct regex_results_type_helper;
template <std::size_t... Is>
struct regex_results_type_helper<std::index_sequence<Is...>>
{
using type = ctre::regex_results<const char*, ctre::captured_content<1 + Is, void>...>;
};
template <std::size_t N>
using regex_results_type =
typename regex_results_type_helper<std::make_index_sequence<N>>::type;
So you have, in your main
regex_results_type<4> mts1;
regex_results_type<2> mts2;
You can even have:
regex_results_type<group_count(ctre_rgx1)> mts1;
regex_results_type<group_count(ctre_rgx2)> mts2;
with appropriate constexpr function group_count
|
73,568,343
| 73,568,378
|
VS2022: std::string.c_str() Dangling pointer warning
|
I am getting a dangling pointer warning from VS2022 for this code snippet:
chars = (unsigned char*)(pProgram->getProgramUUID().c_str());
memcpy(&outputBuffer[bufferPosition], chars, numChars);
Warning
Warning C26815 The pointer [chars] is dangling because it points at a temporary instance which was destroyed.
getProgramUUID() returns a std::string and is defined as
std::string getProgramUUID() { return m_programUUID; }
Basically this is just a memcpy from a pointer returned by c_str().
Any idea whether this is a genuine issue, or is VS just being pedantic?
|
Because getProgramUUID() returns a string by value, that string object will be destroyed and cease to exist once the full expression (the assignment) is finished. The pointer to its character buffer becomes invalid immediately.
As a workaround you could call getProgramUUID() as part of the memcpy call:
memcpy(&outputBuffer[bufferPosition], pProgram->getProgramUUID().c_str(), numChars);
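Alternatively, as a hedged sketch, keep the returned string alive in a named local for as long as the pointer is needed:
// The named local keeps the string (and therefore the buffer behind c_str())
// alive until the end of the enclosing scope.
std::string uuid = pProgram->getProgramUUID();
memcpy(&outputBuffer[bufferPosition], uuid.c_str(), numChars);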
|
73,569,146
| 73,569,841
|
DLL C++ import functions to Delphi
|
How do I access the functions of a C++ DLL in Delphi
#define CCONV _stdcall
typedef struct{
unsigned long BaudRate;
unsigned char PortNumber;
.....
}SSP_COMMAND;
NOMANGLE int CCONV OpenSSPComPort (SSP_COMMAND * cmd);
In documentation:
OpenSSPComPort
Parameters: Pointer to SSP_COMMAND structure
Returns: WORD 0 for fail, 1 for success
I suspect I am wrong about this Pointer.
In Delphi I am attempting this:
type
SSP_COMMAND=class
BaudRate:integer;
PortNumber:integer;
end;
type
TOpenSSPComPort = function (sspc:SSP_COMMAND):Integer;stdcall;
var nv11 : THandle;
OpenSSPComPort:TOpenSSPComPort;
procedure TForm1.FormCreate(Sender: TObject);
begin
if nv11 = 0 then
begin
nv11 := LoadLibrary(pchar('ITLSSPProc.dll'));
@OpenSSPComPort:=GetProcAddress(nv11, 'OpenSSPComPort');
ss_cmd := SSP_COMMAND.create;
end;
end;
procedure TForm1.Button1Click(Sender: TObject);
begin
ss_cmd.BaudRate := 9600;
ss_cmd.PortNumber :=8;
If OpenSSPComPort(ss_cmd)=0 Then
Memo1.lines.add('Error OpenSSPComPort')
else
Memo1.lines.add('OpenSSPComPort - OK');
end;
So the result is: COM0 - Can't open this port
But I passed port 8, so SSP_COMMAND must have been passed incorrectly.
|
According to the C definition from the link you provided, you would need to change the following:
type
SSP_FULL_KEY = packed record
FixedKey : UINT64;
EncryptKey : UINT64;
end;
type
SSP_COMMAND=packed record
key : SSP_FULL_KEY;
BaudRate:integer;
PortNumber:integer;
SSPAddress : byte;
RetryLevel : byte;
EncryptionStatus : byte;
CommandDataLength : byte;
CommandData : Array[0..254] of byte;
ResponseStatus : byte;
ResponseDataLength : byte;
ResponseData : Array[0..254] of byte;
end;
PSSP_COMMAND = ^SSP_COMMAND;
type
TOpenSSPComPort = function (sspc:PSSP_COMMAND):Integer;stdcall;
procedure TForm1.Button1Click(Sender: TObject);
var
ss_cmd : SSP_COMMAND;
begin
FillChar(ss_cmd, sizeof(ss_cmd), 0);
ss_cmd.BaudRate := 9600;
ss_cmd.PortNumber :=8;
If OpenSSPComPort(@ss_cmd)=0 Then
Memo1.lines.add('Error OpenSSPComPort')
else
Memo1.lines.add('OpenSSPComPort - OK');
end;
|
73,569,199
| 73,569,303
|
Switch between method versions
|
I need to rewrite some C++ classes and I would like to be able to switch between the new code and the old code (reliable and fast). What would be good options for doing so? In general I need some kind of switch that decides what to do, like:
int foo::bar()
{
if (yesUseNew == true)
{
return 1 + 1;
}
else
{
return 1 + 2;
}
}
HOW and where could I set yesUseNew? It should be usable in both Debug and Release builds and take effect immediately, so reading some XML config would be too late.
The approach shown, with both code versions directly in the methods, is only an example. At this point I am not sure at which abstraction level I will do this. The primary question is HOW I can distinguish between the versions (and do it fast).
Thanks!
|
Well, I see the following alternatives :
At compile time with #define
e.g:
#define V1 //Comment this line if you want V2. You can also define it on the command line of your compiler
void myfunction() {
#ifdef V1
//V1 Implementation
#else
//V2 Impl
#endif
}
At runtime with env variables for example
This means you can switch without recompiling.
if (std::getenv("V1"))
//V1 Code
else
//V2 Code
Add a configuration option to your app, either in the command line or the config file if it has one. If the dual behaviour will last this is the way to go
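A minimal sketch of the command-line variant (the flag name --use-new is purely illustrative):
#include <cstring>
bool yesUseNew = false;   // read by foo::bar() to pick the code path
int main(int argc, char* argv[])
{
    // Scan the arguments once at startup; --use-new selects the new code.
    for (int i = 1; i < argc; ++i)
        if (std::strcmp(argv[i], "--use-new") == 0)
            yesUseNew = true;
    // ... run the application ...
}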
|
73,569,323
| 73,569,630
|
Direct initialization with prvalue: Bug in MSVC?
|
Consider the following code:
struct S
{
S(int, double) {}
explicit S(const S&) {}
explicit S(S&&) {}
};
void i_take_an_S(S s) {}
S i_return_an_S() { return S{ 4, 2.0 }; }
int main()
{
i_take_an_S(i_return_an_S());
}
With the '-std=c++17' flag, both g++ and clang++ compile this code just fine. MSVC (with the /std:c++17 flag), however, reports
"error C2664: 'void i_take_an_S(S)': cannot convert argument 1 from 'S' to 'S'"
as a compilation error, with the additional note
"Constructor for struct 'S' is declared 'explicit'".
According to C++17's initialization rules (Explanation of point 3) S's copy constructor should not be considered for the initialization of the S parameter of i_take_an_S; S(int, double) should rather be selected as an exact match by direct-list-initialization.
Might this be a bug in MSVC?
|
Yes, MSVC seems to be wrong here.
Generally, since C++17, the initialization rules are such that S{ 4, 2.0 } directly initializes the function's parameter S s (mandatory copy elision).
There is however an exception. An implementation is allowed to introduce a copy in a function parameter or a return value if the class type has only deleted or trivial copy/move constructors and destructor (and at least one of the former non-deleted).
That you declare the copy and move constructor explicit doesn't change that they are copy/move constructors. Because you are not using = default to define them, they are not trivial. Therefore the special permission does not apply and it is wrong of MSVC to try to perform a copy.
Furthermore this special kind of copy ignores accessibility and overload resolution and therefore explicit shouldn't be relevant even if it was performed, see [class.temporary]/3.
When exactly copy elision is performed affects the ABI however, so if this is a defect in MSVC's ABI, then it might not be easily fixed.
|
73,569,633
| 73,572,092
|
Preventing network call while calling `date::make_zoned()`
|
While connected to the web, the following program, which converts a given std::chrono::system_clock::time_point to a string, takes a whopping 1s32ms to complete. However, if my machine is not connected to the web, the program requires a reasonable 54ms.
#include "date/tz.h" // Howard Hinnant's date library
#include <chrono>
#include <iostream>
std::string tp2str(const std::chrono::system_clock::time_point &tp,
const std::string &tz) {
auto z = date::make_zoned(tz, tp);
return date::format("%Y-%m-%d %H:%M:%S %Z (%A)", z);
}
int main() {
const auto now = std::chrono::system_clock::now();
std::cout << tp2str(now, "US/Eastern") << std::endl;
}
Examining the profiler output while connected reveals that the delay is on account of the function sequence date::remote_version ... Curl_http_connect which presumably attempts to connect to a remote web host.
Is there a way to prevent this behaviour while calling date::make_zoned() while still being connected to the web?
|
The documentation for Howard Hinnant's date library explains remote_version(): what it is for, and how to turn it off. I'll quote from the documentation:
The following functions are available only if you compile with the configuration macro HAS_REMOTE_API == 1. Use of this API requires linking to libcurl. AUTO_DOWNLOAD == 1 requires HAS_REMOTE_API == 1. You will be notified at compile time if AUTO_DOWNLOAD == 1 and HAS_REMOTE_API == 0. If HAS_REMOTE_API == 1, then AUTO_DOWNLOAD defaults to 1, otherwise AUTO_DOWNLOAD defaults to 0. On Windows, HAS_REMOTE_API defaults to 0. Everywhere else it defaults to 1. This is because libcurl comes preinstalled everywhere but Windows, but it is available for Windows.
None of these are available with USE_OS_TZDB == 1.
Most likely, you want to add a compile-time definition USE_OS_TZDB=1 to your build.
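For instance, an illustrative invocation, assuming you compile the library's tz.cpp alongside your own sources (include paths and extra flags such as -pthread depend on your setup):
g++ -std=c++17 -DUSE_OS_TZDB=1 main.cpp tz.cpp -o demo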
|
73,569,666
| 73,570,610
|
How can I create a 20x80 board using a dynamic array?
|
I have to do Conway's Game of Life. I need to create a 20x80 board using a dynamic array (institutional requirements).
const int ROWS = 20;
const int COLUMNS = 80;
void createBoard(char board[ROWS][COLUMNS]){
for(int i = 0; i < ROWS; i++){
for(int j = 0; j < COLUMNS; j++){
board[i][j] = ' ';
}
}
cout << "Created board" << endl;
}
I managed to create this board, but I'm not using a dynamic array here.
I know that to create a dynamic array I have to do the following:
char *array = new char[size];
But how would I implement the dynamic array in the for loop? I can't get my head around how to convert the already-written createBoard function to use a dynamic array.
|
In the following code snippet, the function createBoard dynamically allocates and initializes the board as a 2D array in row-major order.
Function deleteBoard deallocates the board.
const int ROWS = 20;
const int COLUMNS = 80;
char ** createBoard( void )
{
//allocate array of pointers to board's rows
char ** board = new char *[ ROWS ];
for( int rowIdx = 0; rowIdx < ROWS; ++rowIdx ){
//allocate board's rowIdx-th row
board[ rowIdx ] = new char[ COLUMNS ];
//initialize content of allocated row
for( int columnIdx = 0; columnIdx < COLUMNS; ++columnIdx )
board[ rowIdx ][ columnIdx ] = ' '; //accessing element at rowIdx-th row and columnIdx-th columns
}
return board;
}
void deleteBoard( char ** aBoard )
{
//deallocate every row
for( int rowIdx = 0; rowIdx < ROWS; ++rowIdx )
delete [] aBoard[ rowIdx ];
//deallocate array of pointers to board's rows
delete [] aBoard;
}
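A hedged usage sketch of the two functions above:
int main()
{
    char ** board = createBoard();   //allocate and blank the ROWS x COLUMNS grid
    board[ 0 ][ 1 ] = '*';           //mark a live cell at row 0, column 1
    //... apply the Game of Life rules to board ...
    deleteBoard( board );            //release every row, then the array of row pointers
    return 0;
}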
|
73,570,084
| 73,576,563
|
Overalignment of an existing type in c++17
|
Given an existing type T, it is possible to overalign it on the stack with the alignas() keyword:
alignas(1024) T variable;
For dynamic allocation, we have a cumbersome syntax :
T *variable = new (std::align_val_t(1024)) T;
However, there are two problems with this syntax:
Microsoft's compiler emits error C2956 although the syntax is valid;
There seems to be no corresponding operator for destruction and aligned delete.
A workaround would seem to be the definition of a new type that encapsulates T:
alignas(1024)
struct AlignedType {
T _;
};
AlignedType *variable = new AlignedType; // Properly aligned in c++17
delete variable; // Suitable aligned deallocation function called in c++17
This workaround is messy: if T's constructor has parameters, we must add some syntactic shenanigans to forward constructor arguments, and we need to access variable._ to get at the real content.
Inheritance is a bit simpler, but if T is a fundamental type (like uint32_t), we can't use inheritance.
My question is as follows:
Is something like
using AlignedType = alignas(32) T;
possible (the above does not compile), or is it just not possible to dynamically allocate an existing type with a custom alignment without resorting to syntactic complexities?
|
It is not possible to achieve the equivalent of your hypothetical:
using AlignedType = alignas(32) T;
There is no such thing as a type that is equivalent to another type but with different alignment. If the language allowed something like that to exist, imagine all the extra overload resolution rules we would have to add to the language.
The syntax
new (std::align_val_t(1024)) T;
should work. However, MSVC has a bug where it considers ::operator new(std::size_t, std::align_val_t) to be a "placement allocation function" rather than a "usual allocation function". This leads to the error you are seeing, where it complains that a placement allocation function matches a usual deallocation function.
(It seems that it is the standard's fault for not being clear: CWG2592. However, MSVC is to blame too; why did they choose to interpret the standard in a way that makes new (std::align_val_t(x)) T illegal?)
A workaround is to implement your own operator new and matching operator delete that delegate to the ones that you actually want to call:
struct alignment_tag {};
void* operator new(std::size_t size, alignment_tag, std::align_val_t alignment) {
return operator new(size, alignment);
}
void operator delete(void* p, alignment_tag, std::align_val_t alignment) {
operator delete(p, alignment);
}
Now you can do
T *variable = new (alignment_tag{}, std::align_val_t(1024)) T;
I personally would wrap this in something like:
template <class T>
struct AlignedDeleter {
AlignedDeleter(std::align_val_t alignment) : alignment_(alignment) {}
void operator()(T* ptr) const {
ptr->~T();
::operator delete(ptr, alignment_);
}
std::align_val_t alignment_;
};
template <typename T>
using AlignedUniquePtr = std::unique_ptr<T, AlignedDeleter<T>>;
template <typename T, typename... Args>
AlignedUniquePtr<T> make_unique_aligned(std::align_val_t alignment, Args&&... args) {
return AlignedUniquePtr<T>(new (alignment_tag{}, alignment) T(std::forward<Args>(args)...), AlignedDeleter<T>(alignment));
}
(You can always call .release() if you need to.)
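A short usage sketch, assuming the helpers above plus <cstdint>, <memory> and <new> are visible:
int main() {
    // A plain uint32_t, over-aligned to 1024 bytes; the matching aligned
    // operator delete runs automatically when the unique_ptr goes out of scope.
    auto p = make_unique_aligned<std::uint32_t>(std::align_val_t(1024), 42u);
    *p += 1;
}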
|
73,570,188
| 73,570,316
|
shared_ptr to derived class from a specific base class
|
I feel like this is a pretty basic C++ question. I am trying to make a class that contains a member variable which is a shared_ptr to any class derived from a specific interface, without caring which particular derived class it is. I am implementing it as follows:
class Base
{
public:
Base();
virtual void do_stuff() = 0;
};
class Derived : Base {
Derived() {}
void do_stuff() { };
};
class Foo
{
public:
Foo() {
mPtr = std::make_shared<Derived>();
}
protected:
std::shared_ptr<Base> mPtr;
};
Compiling gives the following error at the top:
error: no match for ‘operator=’ (operand types are ‘std::shared_ptr<Base>’ and ‘std::shared_ptr<Derived>’)
mPtr = std::make_shared<Derived>();
What's the proper way to do this?
Edit: Changing the inheritance from Base to public Base made it work. However, trying to instantiate the class makes the linker fail.
Foo foo;
Compiling gives
libfoo.so: undefined reference to `Base::Base()'
collect2: error: ld returned 1 exit status
What's this about?
|
Fixed the access specifiers and more:
#include <memory>
class Base {
public:
Base() = default; // must have implementation
// virtual dtor - not strictly needed for shared_ptr:
virtual ~Base() = default;
virtual void do_stuff() = 0;
};
class Derived : public Base { // public inheritance
public: // public access
// Derived() {} // not needed
void do_stuff() override {}; // mark override
};
class Foo {
public:
Foo() :
mPtr{std::make_shared<Derived>()} // use the member init-list
{}
private: // prefer private member variables
std::shared_ptr<Base> mPtr;
};
|
73,570,415
| 73,570,448
|
Range based for loop iteration on std::map need for explanation
|
The following code is running properly
#include<iostream>
#include<map>
#include<algorithm>
#include<string>
#include<vector>
using namespace std;
int main() {
std::map<int, std::string> m;
m[0] = "hello";
m[4] = "!";
m[2] = "world";
for (std::pair<int, std::string> i : m)
{
cout << i.second << endl;
}
return 0;
}
However, if I replace the for loop by
for (std::pair<int, std::string>& i : m)
{
cout << i.second << endl;
}
is not working. I get 'initializing': cannot convert from 'std::pair<const int,std::string>' to 'std::pair<int,std::string> &'
However,
for (auto& i : m)
{
cout << i.second << endl;
}
is working, and also if I do the following
for (float& i : v)
{
cout << i << endl;
}
I don't run into this problem. Why is that?
|
The key in a map cannot be changed, so if you take the key-value pair by reference, the key must be const.
In your example the int is the key; if you could take the pair by non-const-key reference, you could modify that key, which a map cannot allow.
Take a look at map::value_type: https://en.cppreference.com/w/cpp/container/map.
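As a hedged sketch, reusing m from the question, these forms do bind by reference:
for (std::pair<const int, std::string>& i : m)   // spell the key's const explicitly
{
    cout << i.second << endl;
}
for (auto& [key, value] : m)                     // C++17 structured bindings
{
    cout << value << endl;
}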
|
73,571,042
| 73,571,165
|
Clarification of rationale for not having tentative definitions
|
C++17 (N4713), C.1.2 Clause 6: basic concepts, 1:
Change: C++ does not have “tentative definitions” as in C.
Rationale: This avoids having different initialization rules for fundamental types and user-defined types.
Question: what are the different initialization rules for fundamental types and user-defined types? Any examples?
Extra: here are the mutually referential file-local static objects in C:
struct X { int i; struct X* next; };
static struct X a;
static struct X b = { 0, &a };
static struct X a = { 1, &b };
In C++ this code is invalid. How to achieve the same in C++?
|
The preferred way in C++ to declare things that are local to one translation unit is an unnamed (anonymous) namespace. Everything inside namespace {} has internal linkage, i.e. it is file-local. You can declare functions, classes, variables, etc., and extern works as usual to declare a before it is defined.
Note that in C++ it's not necessary to write struct X
struct X { int i; X* next; };
namespace {
extern X a;
X b = { 0, &a };
X a = { 1, &b };
}
|
73,571,223
| 73,573,059
|
Using JNI with kotlin gives UnsatisfiedLinkError
|
I'm trying to use JNI to call C++ code from Kotlin, but for some reason I'm getting an UnsatisfiedLinkError even though the signature should be correct, since it was generated using javah. Any ideas why this would happen?
Kotlin Function Declaration:
external fun initLuaScript(script: String);
javah generated header:
/*
* Class: gg_kapilarny_luakt_LuaScript
* Method: initLuaScript
* Signature: (Ljava/lang/String;)V
*/
JNIEXPORT void JNICALL Java_gg_kapilarny_luakt_LuaScript_initLuaScript
(JNIEnv *, jobject, jstring);
C++ definition of the function:
JNIEXPORT void JNICALL Java_gg_kapilarny_luakt_LuaScript_initLuaScript(JNIEnv* env, jobject self, jstring script) {
lua_State* l = luaL_newstate();
const char* cstr = env->GetStringUTFChars(script, nullptr);
bool result = checkLua(l, luaL_dostring(l, cstr));
env->ReleaseStringUTFChars(script, cstr);
if(!result) {
std::cout << "LuaKT: Failed to load the script!" << std::endl;
return;
} else {
std::cout << "LuaKT: Successfully loaded the thingy" << std::endl;
}
jclass nativeDataClazz = env->FindClass("gg/kapilarny/luakt/NativeData");
jfieldID fid = env->GetFieldID(nativeDataClazz, "nativeData", "Lgg/kapilarny/luakt/NativeData;");
jmethodID constructor = env->GetMethodID(nativeDataClazz, "<init>", "(J)V");
jobject nativeData = env->NewObject(nativeDataClazz, constructor, (jlong) (uintptr_t) l);
env->SetObjectField(self, fid, nativeData);
}
|
I figured out the problem!
The problem was that I did not load the native DLL before calling into it.
Basically, I had the library load when the "Main" library class was loaded, but I happened to execute an external function before the DLL was actually loaded, which caused the error.
|