| question_id | answer_id | title | question | answer |
|---|---|---|---|---|
74,215,648
| 74,225,369
|
How to suppress unscoped enum warning in custom namespace?
|
I have some enums inside of my own namespace, yet I still get that annoying warning about "pollution in the global namespace". Why am I getting this error since they aren't even in the global namespace? How could I get rid of this warning? The exact warning is:
C26812, The enum type 'Adventure_Game::itemType' is unscoped. Prefer 'enum class' over 'enum' (Enum.3).
I have the enum declarations in my namespace inside the header file like this:
namespace Adventure_Game {
enum itemType { Consumable, Key };
enum equipType { Unarmed, Weapon, Shield, Armor };
struct invItem { string name = "(name)", desc = "(desc)"; itemType type; unsigned int amount = 0; float value = 0.0f; };
struct invEquip { string name = "(name)", desc = "(desc)"; equipType type; float low = 0.0f, high = 1.0f, weight = 0.0f, value = 0.0f; bool equip = false; };
}
I tried using enum classes too, but I don't want to use them in this case because it would break everything, and I'd have to use static casts everywhere and it would just be a mess. I would really appreciate help on dealing with this annoying warning.
Thanks :)
|
The MSVC warning C26812 is not from the C/C++ compiler. It's coming from the Static Code Analysis (/analyze) feature and specifically from the C++ Core Guidelines Checker.
The "recommended solution" that the checker is telling you about is to use C++11 strongly typed enumerations. That said, you can just suppress the warning as the C++ Core Guidelines are just recommendations.
#pragma warning(disable : 4619 4616 26812)
// C4619/4616 #pragma warning warnings
// 26812: The enum type 'x' is unscoped. Prefer 'enum class' over 'enum' (Enum.3).
You could also create a custom ruleset through the UI mentioned on Microsoft Docs if you wanted, but most of the other rules are all sensible so I find the single warning suppression to be the easiest path.
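If you prefer not to disable the warning for the whole translation unit, MSVC's push/pop pragmas let you limit the suppression to just the code the analyzer flags (a minimal sketch; where exactly C26812 is reported depends on your code):
#pragma warning(push)
#pragma warning(disable : 26812)
// ... the declarations or uses that the checker flags ...
#pragma warning(pop)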
|
74,216,441
| 74,217,144
|
how to get rid of warning: control reaches end of non-void function c++
|
I wrote this code and don't understand why there's a warning; as far as I can tell, every branch has a return value. How could I fix this?
bool ValidDate(int d, int m, int a) {
if ((m < 1) || (m > 12)) {
return false;
} else {
if ((m == 1) || (m == 3) || (m == 5) || (m == 7) || (m == 8) || (m == 10) || (m == 12)) {
if (d >= 1 && d <= 31) {
return true;
} else {
return false;
}
} else if ((m == 4) || (m == 6) || (m == 9) || (m == 11)) {
if (d >= 1 && d <= 30) {
return true;
} else {
return false;
}
} else if (m == 2) {
if (a % 4 == 0) {
if (d >= 1 && d <= 29) {
return true;
} else {
return false;
}
} else {
if (d >= 1 && d <= 28) {
return true;
} else {
return false;
}
}
}
}
}
I understand what the error means but can't find where the problem is.
|
The warning arises because the compiler does not know that the else if (m == 2) condition will always be true when it is reached, even though by that point all other possibilities have been exhausted. Since the compiler does not perform this kind of analysis, you as the programmer can help it by simply removing that check. You can still leave in a comment saying "m must be 2" to help human readers of the code.
The nested if-else branches can also be simplified considerably by noting the following equivalent constructs:
A check like
if (expr)
return true;
else
return false;
can always be replaced by
return expr;
Similarly, if you're returning at the end of a branch, the subsequent else is redundant. i.e.
if (expr1) {
// ...
return // ...;
}
else if (expr2) {
// ...
return // ...;
}
can be replaced by
if (expr1) {
// ...
return // ...;
}
if (expr2) { // the else is redundant here
// ...
return // ...;
}
Applying these transformations gives a cleaner function:
bool ValidDate2(int d, int m, int a) {
if ((m < 1) || (m > 12))
return false;
if ((m == 1) || (m == 3) || (m == 5) || (m == 7) || (m == 8) || (m == 10) || (m == 12))
return d >= 1 && d <= 31;
if ((m == 4) || (m == 6) || (m == 9) || (m == 11))
return d >= 1 && d <= 30;
// Now m must be 2
return a % 4 == 0 ?
d >= 1 && d <= 29 :
d >= 1 && d <= 28;
}
Note that I've replaced the last if-else by a ternary operator ?:, though whether this makes the code more readable is subjective. Certainly nesting multiple ?: can make it harder to see the structure of the branches.
Here's a demo.
Another way of restructuring the code is to use a switch statement for all the different values of m. This reduces the complexity of the if conditionals, and can make the code easier to read:
bool ValidDate2(int d, int m, int a) {
switch(m) {
case 1: [[fallthrough]];
case 3: [[fallthrough]];
case 5: [[fallthrough]];
case 7: [[fallthrough]];
case 8: [[fallthrough]];
case 10: [[fallthrough]];
case 12: return d >= 1 && d <= 31;
case 4: [[fallthrough]];
case 6: [[fallthrough]];
case 9: [[fallthrough]];
case 11: return d >= 1 && d <= 30;
case 2: return a % 4 == 0 ?
d >= 1 && d <= 29 :
d >= 1 && d <= 28;
default: return false; // m < 1 or m > 12
}
}
Here's a demo.
|
74,216,539
| 74,216,612
|
Question about new placement related to class constructor
|
As far as I know, the new keyword allocates memory and calls the constructor of the object.
class X{
public:
int x;
X(int a):x(a){std::cout<<"X(int a)"<<std::endl;}
~X(){std::cout<<"Delete X"<<std::endl;}
};
int main()
{
X* ptr = new X{2};
// allocate mem sizeof(X);
// call constructor of X{2};
delete(ptr);
}
The allocation and the constructor call can also be written out separately, as in the code below.
X* ptr = static_cast<X*>( operator new(sizeof(X)) );
new(ptr) X{2};
My question is: how does the constructor get applied to ptr?
The "new(ptr) X{2}" placement form is implemented like this.
It looks like there is no code associated with class constructors.
How does this call a constructor?
|
new expressions and operator new are not the same thing. Unfortunately, they have names that suggest that operator new is like e.g. an operator overload operator+ for the + operator, which is however not the case.
A new expression may call an operator new overload, but that is only for the allocation step you are talking about, which is why the standard library placement-new operator new implementation is just a noop.
The construction of the object, including the constructor call if any is to be done, is an intrinsic part of the semantics of the new expression itself that can't be modified. There is no function implementing it and there wouldn't (generally) be any way to implement the construction of an object other than using a new expression itself.
Also, note that your replacement for the allocating new expression is not correct with your example X. X has a non-trivial destructor and is therefore not an implicit-lifetime type. This means that an X object is created only by the placement-new expression in the second line. The result of the operator new call (and also the result of the static_cast) will not be pointing to any X object.
It should be
void* mem = operator new(sizeof(X));
X* ptr = new(mem) X{2};
instead, so that ptr is guaranteed to point to the newly created object.
In general, you also need to make sure that the pointer returned from operator new is suitably aligned for the type X. Otherwise, the placement-new expression will not work correctly.
Since C++17 there is the __STDCPP_DEFAULT_NEW_ALIGNMENT__ macro against which you can test alignment to verify that the standard library's global replaceable operator new implementations without std::align_val_t parameter will guarantee suitable alignment:
static_assert(alignof(X) <= __STDCPP_DEFAULT_NEW_ALIGNMENT__);
If that is not satisfied you must use the std::align_val_t overload:
void* mem = operator new(sizeof(X), std::align_val_t{alignof(X)});
Also, note that you should then (and only then) call the corresponding operator delete with the alignment to deallocate the memory (otherwise without the alignment argument):
ptr->~X();
operator delete(mem, std::align_val_t{alignof(X)});
The allocating new X{2} and delete(ptr); expressions do all of this decision-making on which operator new and operator delete to use internally, based on knowing the type of the operand.
(The above alignment consideration applies to the global replaceable operator new/operator delete overloads. A user-replacement of these operator new overloads must also satisfy the same requirements. However, a custom overload (rather than a replacement) may have other behavior and might be called from these expressions as a result of overload resolution.)
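Putting the pieces together, a minimal sketch of the full manual lifecycle for the default-alignment case checked by the static_assert above:
void* mem = operator new(sizeof(X));   // allocate raw storage only
X* ptr = new (mem) X{2};               // create the object; use the pointer returned by the new expression
// ... use *ptr ...
ptr->~X();                             // end the object's lifetime
operator delete(mem);                  // release the raw storage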
|
74,217,172
| 74,219,814
|
How to read alpha channel from .webm video using ffmpeg in c++
|
Background
I have a .webm file (pix_fmt: yuva420p) converted from a .mov video file in order to reduce the file size, and I would like to read the video data using C++, so I used this repo as a reference.
This works perfectly on .mov video.
Problem
By using same repo, however, there is no alpha channel data (pure zeros on that channel) for .webm video but I can get the alpha data from .mov video.
Apparently many people have already noticed that after the video conversion, ffmpeg somehow detects the video as yuv420p + alpha_mode : 1 and thus the alpha channel is not used, but no one discusses a workaround for this.
I tried forcing pixel format during this part to use yuva420p but that just broke the whole program.
// Set up sws scaler
if (!sws_scaler_ctx) {
auto source_pix_fmt = correct_for_deprecated_pixel_format(av_codec_ctx->pix_fmt);
sws_scaler_ctx = sws_getContext(width, height, source_pix_fmt,
width, height, AV_PIX_FMT_RGB0,
SWS_BILINEAR, NULL, NULL, NULL);
}
I also verified with another tool that my video contains an alpha channel, so I am sure there is an alpha channel in my webm video, but I cannot fetch the data using ffmpeg.
Is there a way to fetch the alpha data? Other video formats or other libraries would work as well, as long as there is some file compression, but I need to access the data in C++.
Note: This is the code I used for converting video to webm
ffmpeg -i input.mov -c:v libvpx-vp9 -pix_fmt yuva420p output.webm
|
You have to force the decoder.
Set the following before avformat_open_input()
AVCodec *vcodec;
vcodec = avcodec_find_decoder_by_name("libvpx-vp9");
av_fmt_ctx->video_codec = vcodec;
av_fmt_ctx->video_codec_id = vcodec->id;
You don't need to set pixel format or any scaler args.
This assumes that your libavcodec is linked with libvpx.
|
74,217,816
| 74,230,127
|
NodeJS function called from v8 SWIG C++ seg. faults
|
I have some C++ code which I am compiling with SWIG (you can clone the code here). It defines the javascript "theFunction", which will be executed from C++ once set up:
v8::Persistent<v8::Function> theFunction;
/** Class to test the wasm setup
*/
class Test {
...
}
I am extending it in my swig Test.i template to setup the javascript callback so that C++ can call it like so :
%extend Test {
void setCallback(const std::string& fnName){
SWIGV8_HANDLESCOPE();
v8::Isolate* isolate = v8::Isolate::GetCurrent();
// first find the global js function
v8::Local<v8::Value> fnObj = SWIGV8_CURRENT_CONTEXT()->Global()->Get(SWIGV8_CURRENT_CONTEXT(), v8::String::NewFromUtf8(isolate, fnName.c_str())).ToLocalChecked();
if (!fnObj->IsFunction()){
printf("setupCallback : error no function found\n");
return;
} else
printf("setupCallback : %s function found\n", fnName.c_str());
v8::Local<v8::Function> func = v8::Local<v8::Function>::Cast(fnObj);
theFunction.Reset(isolate, func);
// now call the global javascript function from C++
v8::Local<v8::Function> func2 = v8::Local<v8::Function>::New(isolate, theFunction);
if (!func2.IsEmpty()) {
const unsigned argc = 1;
v8::Local<v8::Value> argv[argc] = { v8::String::NewFromUtf8(isolate, "hello world") };
v8::Local<v8::Value> ret;
func2->Call(SWIGV8_CURRENT_CONTEXT(), ret, argc, argv);
}
}
}
When I run my javascript, the function is found in the globals, but it crashes when C++ tries to execute it:
global.fnName = function (str) {
console.log(str);
}
test.setCallback("fnName");
But unfortunately C++ seg. faults before executing the javascript "theFunction" :
setupCallback : fnName function found
Segmentation fault (core dumped)
Is there something missing from the v8 code to make the callback work?
In the supplied test code, the file swigCNodejw_wrap_wrap.cxx is autogenerated and contains the v8 code.
|
Change the following Call method :
func2->Call(SWIGV8_CURRENT_CONTEXT(), ret, argc, argv);
To this :
func2->Call(SWIGV8_CURRENT_CONTEXT(), func2, argc, argv);
The second argument of v8::Function::Call() is the receiver (the this value for the call), not an out-parameter for the result; passing the empty ret handle there is what crashes, and the return value is instead delivered through Call()'s own return value.
|
74,218,474
| 74,223,140
|
Is there any way to create a lookup table at compile time or in preprocessor for a factory creational pattern in c++?
|
I have developed code for image processing that can load, with the help of a json file, different image processing processes depending on the data in the json file. When the file is read, at some point a function is reached that, depending on the type of process specified in the file, creates the corresponding process derived from a base process:
inline BaseProcess* getSpecificObject(QString id, QJsonObject &proc)
{
if(id == "ShowImage")
{
return BaseProcess::getProcessFromJson<ShowImage>(proc);
}
else if(id == "OtherProcess")
{
return BaseProcess::getProcessFromJson<OtherProcess>(proc);
}
else
{
qDebug() << "Unknow ModelObject specified in json file";
return nullptr;
}
}
I have in a separate file the above code and it works as expected but there is the real question:
Is there any way to have in BaseProcess header file a static lookup table or a map that can be populated (in compile time) from every derived process header file?
I basically want to move the "if-else if" chain from that separate header file into the corresponding portion of the derived classes, so that I can add or remove process header files without having to modify the code in that file.
The need for this is that this code is intended to be generic and can be ported to another project as-is and you only need to change the processes that the project is going to use. So to avoid human error, I want another developer who has this code to only have to worry about the specific process being developed.
I actually tried to use constexpr and templates and also macros metaprogramming for creating in any way at compile time or with preprocessor the desired lookup table but I have no success.
With macros I know that I can create arrays, but I need map-like behaviour: you specify a key (e.g. the "ShowImage" string) and it returns the type of the object to be created.
I am using c++17 with msvc 2019 compiler in QT 5.15.
|
You might replace the if-else chain with a map.
std::/*unordered_*/map<QString, std::function<BaseProcess*(QJsonObject &)>> processFactory;
BaseProcess* getSpecificObject(QString id, QJsonObject &proc)
{
if (auto it = processFactory.find(id); it != processFactory.end()) {
return it->second(proc);
}
qDebug() << "Unknow ModelObject specified in json file";
return nullptr;
}
Then either your factory has to know each individual process,
std::map<QString, std::function<BaseProcess*(QJsonObject &)>> processFactory = {
{"ShowImage", &BaseProcess::getProcessFromJson<ShowImage>},
{"OtherProcess", &BaseProcess::getProcessFromJson<OtherProcess>}
};
or your process has to know the factory.
That map can be populated from separate files through a global registry.
(It is not done at compile time, but it is done before main.)
bool registerProcess(QString id, std::function<BaseProcess*(QJsonObject &)> f) // 'register' is a reserved keyword, so use another name
{
return processFactory.emplace(id, f).second; // emplace returns a pair<iterator, bool>
}
And then, in ShowImage.cpp
static const bool registered = registerProcess("ShowImage", &BaseProcess::getProcessFromJson<ShowImage>);
and in OtherProcess.cpp
static const bool registered = registerProcess("OtherProcess", &BaseProcess::getProcessFromJson<OtherProcess>);
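One caveat with this self-registration pattern, not raised in the original answer: the registered initializers live in different translation units than the global processFactory map, and C++ does not guarantee their construction order. A common fix, sketched here as a variant of the register function, is to hand the map out through a function with a local static so it is constructed on first use:
std::map<QString, std::function<BaseProcess*(QJsonObject &)>>& getProcessFactory()
{
    static std::map<QString, std::function<BaseProcess*(QJsonObject &)>> factory; // constructed on first use
    return factory;
}
bool registerProcess(QString id, std::function<BaseProcess*(QJsonObject &)> f)
{
    return getProcessFactory().emplace(id, f).second;
}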
|
74,218,524
| 74,218,699
|
Allocating on heap and then use shared pointer, how to free the data
|
I have the following scenario.
I have allocated a chunk of data using new, then assigned the block of data to a smart pointer in the class DNA_ImageBlob. How do I free the T* blob data?
template <class T>
void DNA_ImageBlob<T>::Reset(int h, int w, int c)
{
SetWidth(w);
SetHeight(h);
SetDepth(c);
T *blob = alocData(h, w, c);
data_ = std::make_shared<T>(blob);
compressed_ = false;
}
template <class T>
T *DNA_ImageBlob<T>::alocData(unsigned int h, unsigned int w, unsigned int d)
{
if (w * h * d == 0)
{
return nullptr;
}
T *blob = new T[w * h * d];
return blob;
}
and the usage of that class is in :
DNA_Mask::DNA_Mask(int id, int xSize, int ySize, int zSize)
: DNA_BinaryImageBlob(xSize, ySize, zSize)
{
mID = id;
height_ = xSize;
width_ = ySize;
depth_ = zSize;
data_ = std::shared_ptr<BinaryType>(this->alocData(height_, width_, depth_));
|
Here is an example, if you have questions just ask.
#include <vector>
#include <cstdint>
#include <iostream>
// Blob is a RAII class, meaning its destructor will
// cleanup all resources it owns (in this case memory held by vector)
class Blob
{
public:
Blob(std::size_t width, std::size_t depth, std::size_t height) :
m_blob_data(width * depth * height, 0) // allocates memory and sets it to 0. (last parameter is optional)
{
}
// provide read only access to data
const auto& get_data() const noexcept
{
return m_blob_data;
}
~Blob() = default;
private:
std::vector<std::uint8_t> m_blob_data; // when destructor is called, vector gets destructed which will free memory
};
int main()
{
// start a scope (yes this has meaning in C++, and is important with respect to RAII)
{
Blob blob{ 2,2,2 };
std::cout << blob.get_data().size() << "\n"; // 2*2*2 = 8 items.
} // blob goes out of scope here, so from here on memory is freed.
return 0;
}
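If you do need to keep the raw new[] allocation from the question rather than switching to std::vector, note that std::make_shared<T>(blob) does not adopt the array at all (it tries to construct a new T from the pointer), and a plain shared_ptr would call delete instead of delete[]. A minimal sketch, assuming data_ is a std::shared_ptr<T> member as in the question:
// pre-C++17: give shared_ptr the matching array deleter
data_ = std::shared_ptr<T>(blob, std::default_delete<T[]>());
// C++17: the array form applies delete[] automatically,
// but data_ then has to be declared as std::shared_ptr<T[]>
// data_ = std::shared_ptr<T[]>(blob);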
|
74,219,608
| 74,219,889
|
Determine duplicates/pairs in an array in C++
|
I have been doing this problem for 2 days now, and I still can't figure out how to do this properly.
In this program, I have to input the number of sticks available (let's say 5). Then, the user will be asked to input the lengths of each stick (space-separated integer). Let's say the lengths of each stick respectively are [4, 4, 3, 3, 4]. Now, I have to determine if there are pairs (2 sticks of same length). In this case, we have 2 (4,4 and 3,3). Since there are 2 pairs, we can create a canvas (a canvas has a total of 2 pairs of sticks as the frame). Now, I don't know exactly how to determine how many "pairs" there are in an array. I would like to ask for your help and guidance. Just note that I am a beginner. I might not understand complex processes. So, if there is a simple (or something that a beginner can understand) way to do it, it would be great. It's just that I don't want to put something in my code that I don't fully comprehend. Thank you!
Attached here is the link to the problem itself.
https://codeforces.com/problemset/problem/127/B
Here is my code (without the process that determines the number of pairs)
#include<iostream>
#include<cmath>
#define MAX 100
int lookForPairs(int numberOfSticks);
int main(void){
int numberOfSticks = 0, maxNumOfFrames = 0;
std::cin >> numberOfSticks;
maxNumOfFrames = lookForPairs(numberOfSticks);
std::cout << maxNumOfFrames << std::endl;
return 0;
}
int lookForPairs(int numberOfSticks){
int lengths[MAX], pairs = 0, count = 0, canvas = 0;
for(int i=0; i<numberOfSticks; i++){
std::cin >> lengths[i];
}
pairs = floor(count/2);
canvas = floor(pairs/2);
return count;
}
I tried doing it like this, but it was flawed. It wouldn't work when there were 3 or more integers of the same number (for example [4, 4, 3, 4, 2] or [5, 5, 5, 5, 6]). On the first array, the count would be 6 when it should only be 3, since there are only three 4s.
for(int i=0; i<numberOfSticks; i++){
for (int j=0; j<numberOfSticks; j++){
if (lengths[i] == lengths[j] && i!=j)
count++;
}
}
|
Instead of storing all the lengths and then comparing them, count how many there are of each length directly.
These values are known to be positive and at most 100, so you can use an int[100] array for this as well:
int counts[MAX] = {}; // Initialize array to all zeros.
for(int i = 0; i < numberOfSticks; i++) {
int length = 0;
std::cin >> length;
counts[length-1] += 1; // Adjust for zero-based indexing.
}
Then count them:
int pairs = 0;
for(int i = 0; i < MAX; i++) {
pairs += counts[i] / 2;
}
and then you have the answer:
return pairs;
|
74,219,738
| 74,220,390
|
why didn't memory_order_seq_cst protect my atomic-operation running sequence?
|
#include <assert.h>
#include <atomic>
#include <iostream>
#include <thread>
std::atomic_bool b(false);
std::atomic_bool lock{false};
void producer() {
b.store(true, std::memory_order_seq_cst);
lock.store(true, std::memory_order_seq_cst);
}
void consume() {
while (!lock.load(std::memory_order_seq_cst))
;
assert(b.load(std::memory_order_seq_cst));
b.store(false, std::memory_order_seq_cst);
lock.store(false, std::memory_order_seq_cst);
}
int main() {
std::thread t1([&]() {
while (true)
consume();
});
std::thread t2([&]() {
while (true)
producer();
});
t1.join();
t2.join();
}
The assert in consume should never fail; memory_order_seq_cst guarantees that the atomic operations run in the order in which they are written.
But the assert failure happened :(
|
void producer() {
b.store(true, std::memory_order_seq_cst); // 1
lock.store(true, std::memory_order_seq_cst); // 2
}
void consume() {
while (!lock.load(std::memory_order_seq_cst))
; // 3
assert(b.load(std::memory_order_seq_cst)); // 4
b.store(false, std::memory_order_seq_cst); // 5
lock.store(false, std::memory_order_seq_cst); // 6
}
The following steps will make the assert fail: 1 -> 5 -> 6 -> 2 -> 3 -> 4
Note that these steps come from different iterations of the two infinite loops. memory_order_seq_cst only guarantees a single total order over the individual operations; it does not make the two stores in producer() (or the group of operations at the end of consume()) execute as one atomic block. So the consumer can reset b to false (step 5) between the producer's b.store (step 1) and its lock.store (step 2), and the next pass through consume() then sees lock == true while b is still false.
|
74,222,925
| 74,223,441
|
Building red-black tree with vector of lists in C++
|
vector<list<Nodo<string>*>> lista;
I have this vector of lists and I'm trying to write a method to insert elements into it
template <typename T> void HashRBT<T>:: riempimento()
{
for(auto &it:vett_dati)
{ int key=it.first;
string value=(it.second);
int id=hashFunctionDivsion(key);
Nodo<T> *x=rb->searchNodo(rb->getRoot(),id);
if(x==rb->getNillT())
{
rb->insertNodoRB(id,value);
}
else {
lista.resize(getDim());
Nodo<T> *y= new Nodo<T>(id,value);
lista.at(id).push_front(y); //inserimento in testa
}
}
Print_Lista();
}
Now, the else block is where I insert the elements into this vector of lists, but I don't understand why, if I comment out the resize statement, this doesn't work and I get an error like this: vector::_M_range_check.
I would like someone to explain to me what happens in memory.
What am I allocating with that resize?
|
Break the problem down and look at it without the logic of your rb tree.
std::vector<int> vec{10,20}; // vector of size 2
vec.at(0); // fine: element 0 exists.
vec.at(1); // fine: element 1 exists.
vec.at(2); // will throw because vector has size 2, so only elements 0 and 1
vec.resize(3); // now vector has size 3: elements 0, 1, 2
vec.at(2); // fine: element 2 exists.
Whether you store an int or a std::list<Node<T>> in it doesn't matter for std::vector's fundamental logic.
So, if you access an element with std::vector::at that doesn't exist you will get an error:
std::out_of_range if !(pos < size()).
Similarly, std::vector::operator[] does not perform this check, so it will not work either, but it fails silently; omitting the check has performance benefits.
Unlike std::map::operator[], this operator never inserts a new element into the container. Accessing a nonexistent element through this operator is undefined behavior.
|
74,223,378
| 74,229,330
|
wxWidgets - setting an image as background
|
I am trying to set an image as the background but I have no idea how to do it.
It has been hours. The first lines of code I copied from somebody and they seemed alright but it still doesn't work.
The error that I have appears in a separate window and says it cannot load/find the image.
#include "MainFrame.h"
#include <wx/wx.h>
#include "FrameTwo.h"
#include "wx/custombgwin.h"
#include <wx/dcclient.h>
enum IDs {
BUTTON_ID=2
};
wxBEGIN_EVENT_TABLE(MainFrame, wxFrame)
EVT_BUTTON(BUTTON_ID, MainFrame::OnLoginClicked)
wxEND_EVENT_TABLE()
MainFrame::MainFrame(const wxString& title) : wxFrame(nullptr, wxID_ANY, title) {
wxPanel* panel = new wxPanel(this);
wxClientDC dc(panel);
wxMemoryDC mdc;
int w, h;
w = 600;
h = 600;
dc.GetSize(&w, &h);
wxImage img(wxT("C:\\Users\\ALEX\\Desktop\\background_uno.jpeg"), wxBITMAP_TYPE_JPEG);
wxBitmap cat(img.Scale(w, h, wxIMAGE_QUALITY_HIGH));
mdc.SelectObject(cat);
dc.Blit(0, 0, cat.GetWidth(), cat.GetHeight(), &mdc, 0, 0, wxCOPY, 0);
mdc.SelectObject(wxNullBitmap);
//panel->SetBackgroundColour(wxColour(255, 102, 102)); //light orange-red
//panel->SetBackgroundColour(wxColour(155, 202, 62)); //fresh green
wxButton* login = new wxButton(panel, BUTTON_ID, "Log in", wxPoint(340, 350), wxSize(100, 35));
CreateStatusBar(); //creates a bar in the bottom of the frame
//wxBoxSizer* sizer = new wxBoxSizer(wxALIGN_CENTER_VERTICAL);
//sizer -> Add(login, 1, wxEXPAND | wxALL, 10);
wxButton* exit = new wxButton(panel, wxID_OK, "Exit", wxPoint(340, 390), wxSize(100, 35));
exit->Bind(wxEVT_BUTTON, &MainFrame::OnExitClicked, this);
panel->SetFont(panel->GetFont().Scale(2.5));
wxStaticText* staticText = new wxStaticText(panel, wxID_ANY, "UNO DELUXE", wxPoint(300, 40));
panel->SetFont(panel->GetFont().Scale(0.5));
wxStaticText* username = new wxStaticText(panel, wxID_ANY, "Enter username: ", wxPoint(180, 250));
wxTextCtrl* textCtrl = new wxTextCtrl(panel, wxID_ANY, " ", wxPoint(300, 250), wxSize(250, -1));
wxStaticText* password = new wxStaticText(panel, wxID_ANY, "Enter password: ", wxPoint(180, 290));
wxTextCtrl* textCtrl2 = new wxTextCtrl(panel, wxID_ANY, "", wxPoint(300, 290), wxSize(250, -1), wxTE_PASSWORD);
}
|
I can't help you with the program not being able to find the image - the image needs to exist either in the application's working folder or on an absolute path.
You can get around this by including the image in the executable. Unfortunately there is no consistent cross-platform way to do this - windows and mac have separate ways of doing this but GTK has no way at all. Here are some suggestions from the wiki for some possible workarounds.
Regardless of the method used to get the image into the application, you should not draw it with a client DC. Instead you should handle the paint event for the panel and draw the image in the handler for that event. Here's an example:
#include <wx/wx.h>
class ImageBgFrame: public wxFrame
{
public:
ImageBgFrame(wxFrame *frame, const wxString& title);
private:
void OnImagePanelPaint(wxPaintEvent&);
void OnExit(wxCommandEvent&);
void CreateScaledBg();
wxPanel* m_imagePanel;
wxImage m_image;
wxBitmap m_scaledBg;
};
ImageBgFrame::ImageBgFrame(wxFrame *frame, const wxString& title)
:wxFrame(frame, wxID_ANY, title)
{
::wxInitAllImageHandlers();
// Try to load the image.
m_image = wxImage("Thinking-of-getting-a-cat.png", wxBITMAP_TYPE_PNG);
if ( !m_image.IsOk() )
{
return;
}
// Create the controls.
m_imagePanel = new wxPanel(this, wxID_ANY);
CreateScaledBg();
wxButton* exit = new wxButton(m_imagePanel, wxID_OK, "Exit");
// Arrange the controls with a sizer.
wxBoxSizer* szr = new wxBoxSizer(wxVERTICAL);
szr->Add(exit,wxSizerFlags().Border(wxALL));
m_imagePanel->SetSizer(szr);
Layout();
// Bind Event Handlers
m_imagePanel->Bind(wxEVT_PAINT, &ImageBgFrame::OnImagePanelPaint, this);
exit->Bind(wxEVT_BUTTON, &ImageBgFrame::OnExit, this);
}
void ImageBgFrame::OnImagePanelPaint(wxPaintEvent&)
{
if ( m_imagePanel->GetSize() != m_scaledBg.GetSize() )
{
CreateScaledBg();
}
wxPaintDC dc(m_imagePanel);
dc.DrawBitmap(m_scaledBg,0,0);
}
void ImageBgFrame::OnExit(wxCommandEvent&)
{
Close();
}
void ImageBgFrame::CreateScaledBg()
{
wxSize sz = m_imagePanel->GetSize();
m_scaledBg = wxBitmap(m_image.Scale(sz.GetWidth(), sz.GetHeight(),
wxIMAGE_QUALITY_NORMAL));
}
class ImageBgApp : public wxApp
{
public:
bool OnInit()
{
ImageBgFrame* frame = new ImageBgFrame(NULL, "Image BG");
frame->Show();
return true;
}
};
wxIMPLEMENT_APP(ImageBgApp);
This assumes that the image "Thinking-of-getting-a-cat.png" is in the application's working folder. (A screenshot of the result on GTK followed here.)
I've also made a few other changes to the code you gave:
I'm using Bind to connect the event handlers instead of static event tables.
I'm using default sizes for the controls instead of setting the size explicitly and laying out the controls in a sizer instead of setting explicit location.
Both of these changes are a more modern way of doing things.
|
74,223,559
| 74,223,606
|
ifstream unable to load in two or more files correctly [C++]
|
I'm making an "Identity generator" where the computer randomly picks lines from .txt files.
Although it works fine with the first part of the code, when I repeat that code and change the variables it still uses the old txt file.
Expected result:
Full name: {random first name} {random sur name}
Address: {random address}
Actual result:
Full name: {random first name} {random first name}
Address: {random first name}
My code
cout << "Full name: ";
srand((unsigned) time(NULL));
std::ifstream firstnamefile("englishfirstname.txt");
int total_lines = 0;
while(getline(firstnamefile,line))
{
total_lines++;
lines.push_back(line);
}
int random_number = rand() % total_lines;
cout << lines[random_number] << " ";
//--- Surname ---
srand((unsigned) time(NULL));
std::ifstream surnamefile("englishsurname.txt");
total_lines = 0;
while(getline(surnamefile,line))
{
total_lines++;
lines.push_back(line);
}
random_number = rand() % total_lines;
cout << lines[random_number] << endl;
// --- Address ---
cout << "Address: ";
srand((unsigned) time(NULL));
std::ifstream addressfile("addresses.txt");
total_lines = 0;
while(getline(addressfile,line))
{
total_lines++;
lines.push_back(line);
}
random_number = rand() % total_lines;
cout << lines[random_number] << endl;
The .txt files are just a list of names for example:
John
Michael
Matthew
Etc...
|
You never clear lines, so the start of the vector is always going to be your list of first names. You do reset total_lines to 0 before reading each file, so your random range is 0..total_lines each time, so you're always picking from the start of the array, which again, is all first names.
Assuming lines is a std::vector, you need lines.clear() along with setting total_lines to 0 to reset the state between each file read.
Note that you have three identical blobs of code that differ only by a file name; this is a very good candidate for a function that accepts a file as input, and returns a random line as output.
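A minimal sketch of that refactoring (the helper name is just an illustration):
std::string getRandomLine(const std::string& filename)
{
    std::ifstream file(filename);
    std::vector<std::string> lines;   // local, so it starts empty for every file
    std::string line;
    while (std::getline(file, line))
        lines.push_back(line);
    if (lines.empty())
        return "";
    return lines[rand() % lines.size()];
}
// usage (call srand((unsigned)time(NULL)) once at program start, not before every pick):
std::cout << "Full name: " << getRandomLine("englishfirstname.txt") << " "
          << getRandomLine("englishsurname.txt") << std::endl;
std::cout << "Address: " << getRandomLine("addresses.txt") << std::endl;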
|
74,223,602
| 74,224,913
|
Should I use std::string in a web server for parsing a client request?
|
I'm coding a little HTTP 1.1 web server in C++98 (the C++ version mandated by my school) and I haven't made a decision about which data type I'm going to use to perform the request parsing, and how.
Since I'll be receiving read-only (by read-only I mean that I don't have to modify the buffer) data from a user-agent, would it make sense to use std::string to store the incoming data?
HTTP syntax is very straightforward and can be parsed using a finite state machine. Iterating over a const char * seems enough and doesn't make any allocations; I can use the buffer that recv gives me.
On the other hand, I could use std::string facilities like find and substr to parse the request, but that would lead to memory allocations.
My server doesn't need to be as efficient as nginx, but I'm still worried about the performance of my application.
I'm eager to know your thoughts.
|
Definitely. It's a school project, not a high-performance production server (in which case you'd be using a more modern C++ variant).
The biggest performance problem you'd typically have with std::string is not parsing, but string building. a + b + c + d + e can be rather inefficient. Details, really: just start by writing a correct implementation, and then see which parts are too slow in testing. There are very few projects, even in commercial software development, where that's not the right approach.
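For illustration, a minimal C++98-style sketch of parsing just the request line with find and substr (a hypothetical helper, not a full parser):
struct RequestLine { std::string method, target, version; };
bool parseRequestLine(const std::string& raw, RequestLine& out)
{
    // e.g. raw starts with "GET /index.html HTTP/1.1\r\n"
    std::string::size_type sp1 = raw.find(' ');
    std::string::size_type sp2 = raw.find(' ', sp1 + 1);
    std::string::size_type end = raw.find("\r\n");
    if (sp1 == std::string::npos || sp2 == std::string::npos ||
        end == std::string::npos || end < sp2)
        return false;
    out.method  = raw.substr(0, sp1);
    out.target  = raw.substr(sp1 + 1, sp2 - sp1 - 1);
    out.version = raw.substr(sp2 + 1, end - sp2 - 1);
    return true;
}
The handful of substr allocations here are negligible next to the socket I/O, which is the point made above.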
|
74,224,218
| 74,224,431
|
What happens when glDraw*Instanced() is called with primcount greater than how many times a vertex attribute can get updated?
|
In my opengl program (opengl 3.3 core profile) I have an array with N float elements in it. I pass the array to a VBO and specify it as an array of vertex attributes at index 0. Here data is the array:
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);
glVertexAttribPointer(0, 1, GL_FLOAT, GL_FALSE, sizeof(float), (void*)0);
I want the attribute to update once per instance, so
glVertexAttribDivisor(0, 1);
The question is what happens when the program is told to draw N * N instances of an object with a call to glDraw*Instanced()?
There are only N elements in the array so I guess the first N instances will be drawn as expected. But what happens after that? Will it cycle through the array again to draw another set of N instances and then again to draw another one and then another one and so on until it draws all of them? Or will it just treat attribute 0 as if it was set to NULL or something for the remaining N * (N - 1) instances? If so, is there a way to make it cycle through the array as described above?
The only solution I can think of is not making the attribute instanced and changing the array's size to N * N, with the values inside repeating N times, but that would take up an unnecessary amount of memory.
|
You get the exact same thing that happens if you render more vertices than there is storage in the buffer objects: out-of-bound reads. If robust memory accesses are enabled in your context, then this value will be either zero or some other value within the storage of the buffer object. Without robust accesses however, out-of-bound reads result in undefined behavior and "may result in GL interruption or termination".
So... don't do that.
If so, is there a way to make it cycle through the array as described above?
Does it have to be that way exactly?
The divisor you passed to glVertexAttribDivisor is a divisor. If it isn't zero, it takes the instance index and divides it by the divisor, and that result is used as the index into the instance array. So if you pass N as the divisor, the first N instances get the value from index 0, the next N instances get the value from index 1, etc.
This will iterate in a different order from the "cycling" behavior you want, but it ultimately does the same thing.
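A small sketch of that setup, assuming N is the number of elements in your attribute buffer and vertexCount is the per-instance vertex count of your mesh (both placeholders):
glVertexAttribDivisor(0, N);   // attribute 0 advances to the next array element every N instances
glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, N * N);   // all N array elements get used, each for N instances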
Note that, unless there is some other instance array involved with a different divisor or your vertex shader uses gl_InstanceID, all this will do is render the same thing N times. Which probably isn't useful.
|
74,224,268
| 74,232,322
|
Link Static CUDA Library using CMake
|
I have the following project structure:
| CMakeLists.txt (1)
| main.cpp
| cudalib/
| CMakeLists.txt (2)
| cppfunction.cpp
| cudafunction.cu
| cudalib.h
and I am trying to build the content of the cudalib folder as a static library that is afterwards linked to by the main project. This process usually works if the static library has no CUDA code in it. But as soon as it has CUDA code, I get the following linker error when building the mainapp:
cudalib.lib(cudafunction.cu.obj) : error LNK2019: unresolved external symbol cudaMalloc referenced in function "void __cdecl cudafunction(void)" (?cudafunction@@YAXXZ)
cudalib.lib(cudafunction.cu.obj) : error LNK2019: unresolved external symbol __cudaRegisterLinkedBinary_c8e43a78_15_cudafunction_cu_4243b124 referenced in function "void __cdecl __nv_cudaEntityRegisterCallback(void * *)" (?__nv_cudaEntityRegisterCallback@@YAXPEAPEAX@Z)
Only building the library works fine. The error only happens when I try to link the final app with my cudalib library.
A full MWE has the following file contents:
CMakeLists.txt (1)
cmake_minimum_required(VERSION 3.22)
project(MAINAPP CUDA CXX)
add_subdirectory(cudalib) # MYCUDALIB is defined here
add_executable(mainapp main.cpp)
target_link_libraries(mainapp ${MYCUDALIB})
CMakeLists.txt (2)
cmake_minimum_required(VERSION 3.22)
project(CUDALIB CUDA CXX)
add_library(cudalib STATIC cppfunction.cpp cudafunction.cu)
set_target_properties(cudalib PROPERTIES CUDA_SEPARABLE_COMPILATION ON)
# define so we know the path to the lib in the upper project
set(MYCUDALIB ${CMAKE_CURRENT_BINARY_DIR}/cudalib.lib PARENT_SCOPE)
main.cpp
#include "cudalib/cudalib.h"
int main(){
cudafunction();
cppfunction();
return 0;
}
cppfunction.cpp
#include <iostream>
void cppfunction(){
std::cout << "In CPPFunction";
}
cudafunction.cu
#include <iostream>
void cudafunction(){
std::cout << "In CudaFunction";
void* ptr;
cudaMalloc(&ptr, 1000);
}
cudalib.h
void cppfunction();
void cudafunction();
Does somebody know why this error happens? How could I improve this project structure?
|
I found the answer after hours of trouble.
There are two resources that gave me the hints I needed:
https://developer.nvidia.com/blog/building-cuda-applications-cmake/
https://gist.github.com/gavinb/c993f71cf33d2354515c4452a3f8ef30
You have to link the mainapp against the CUDA runtime:
CMakeLists.txt (1)
cmake_minimum_required(VERSION 3.22)
project(MAINAPP CUDA CXX)
find_library(CUDART_LIBRARY cudart ${CMAKE_CUDA_IMPLICIT_LINK_DIRECTORIES})
add_subdirectory(cudalib) # MYCUDALIB is defined here
add_executable(mainapp main.cpp)
target_link_libraries(mainapp ${MYCUDALIB} ${CUDART_LIBRARY})
and if you have CUDA_SEPARABLE_COMPILATION==ON you have to enable device-linking before linking the final executable like this:
CMakeLists.txt (2)
cmake_minimum_required(VERSION 3.22)
project(CUDALIB CUDA CXX)
add_library(cudalib cppfunction.cpp cudafunction.cu)
set_target_properties(cudalib PROPERTIES CUDA_SEPARABLE_COMPILATION ON)
set_target_properties(cudalib PROPERTIES CUDA_RESOLVE_DEVICE_SYMBOLS ON)
# so we know the path to the lib in the other upper project
set(MYCUDALIB ${CMAKE_CURRENT_BINARY_DIR}/cudalib.lib PARENT_SCOPE)
|
74,224,485
| 74,229,054
|
Is it possible to iterate through a vector of vectors columnwise?
|
I have a vector of vectors of strings. I want to find the lengths of the longest string in each column. All the subvectors are of the same length and have an element stored in it, so it would be rather easy to find it with two for loops and reversed indices.
vector<vector<string>> myvec = {
{ "a", "aaa", "aa"},
{"bb", "b", "bbbb"},
{"cc", "cc", "ccc"}
};
But is it possible to do it with iterators without using indices?
|
Here is a solution using C++20 ranges and lambdas that is similar to Nelfeal's answer:
// returns a function returning the i-th element of an iterable container
auto ith_element = [](size_t i) {
return [i](auto const& v){
return v[i];
};
};
// returns a range over the i-th column
auto column = [ith_element](size_t i) {
return std::views::transform(ith_element(i)); // returns a range containing only the i-th elements of the elements in the input range
};
// returns the size of something
auto length = [](auto const& s){ return s.size(); };
// returns the max length of the i-th column
auto max_length_of_col = [column, length](auto const& v, size_t i) {
return std::ranges::max(
v | column(i) | std::views::transform(length)
);
};
I personally like how the ranges library helps you convey intent with your code, rather than having to prescribe the procedure to achieve your goal.
Note that if you replace the body of the inner lambda in ith_element with the following block, it will also work for iterable containers without random access.
auto it = v.cbegin();
std::ranges::advance(it, i);
return *it;
Demo
As a final remark, this solution lets you iterate over one column given an index of the column. I would advise against implementing a column iterator for vector<vector>: The existence of an iterator implies that something exists in memory that you can iterate over. The existence of columns is only implied by the additional information that you have given us, namely that all rows have the same length. If you do want iterators both for columns and rows, I would wrap your container in a new type (usually called matrix or similar) that properly conveys that intent. Then you can implement iterators for that new type, see Calath's answer.
EDIT:
I realized that my argument against a column iterator can be used as an argument against the column function in this answer as well. So here is a solution that lets you iterate over columns in a range-based for loop instead of iterating over column indices:
for (auto column : columns(myvec)){
std::cout << max_length(column) << std::endl;
}
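The columns() and max_length() helpers used above are not defined in the original answer; here is one possible C++20 sketch of what they could look like (needs <ranges> and <algorithm>):
// a range of column views over a rectangular vector of vectors
auto columns = [](auto const& v) {
    std::size_t n_cols = v.empty() ? 0 : v.front().size();
    return std::views::iota(std::size_t{0}, n_cols)
         | std::views::transform([&v](std::size_t i) {
               // each column is a view of the i-th element of every row
               return v | std::views::transform(
                   [i](auto const& row) -> decltype(auto) { return row[i]; });
           });
};
// the longest string length within one column
auto max_length = [](auto&& column) {
    return std::ranges::max(
        column | std::views::transform([](auto const& s) { return s.size(); }));
};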
|
74,225,934
| 74,226,088
|
new (ptr) T() == static_cast<T*>(ptr)?
|
I want to implement something like Rust's dyn trait (I know this doesn't work for multiple inheritance):
template<template<typename> typename Trait>
class Dyn
{
struct _SizeCaler:Trait<void>{ void* _p;};
char _buffer[sizeof(_SizeCaler)];
public:
template<typename T>
Dyn(T* value){
static_assert(std::is_base_of_v<Trait<void>,Trait<T>>
,"error Trait T,is not derive from trait<void>"
);
static_assert(sizeof(_buffer) >= sizeof(Trait<T>)
,"different vtable imple cause insufficient cache"
);
new (_buffer)Trait<T>{value};
}
Trait<void>* operator->(){
return static_cast<Trait<void>*>(reinterpret_cast<void*>(_buffer));
}
};
template<template<typename> typename Trait,typename T>
struct DynBase:Trait<void>
{
protected:
T* self;
public:
DynBase(T* value):self(value){}
};
struct Point{
double x,y;
};
struct Rect{
Point m_leftDown,m_rightUp;
Rect(Point p1,Point p2){
m_leftDown = Point{std::min(p1.x,p2.x),std::min(p1.y,p2.y)};
m_rightUp = Point{std::max(p1.x,p2.x),std::max(p1.y,p2.y)};
}
};
template<typename = void>
struct Shape;
template<>
struct Shape<void>
{
virtual double area() = 0;
};
template<>
struct Shape<Rect> final
:DynBase<Shape,Rect>
{
using DynBase<Shape,Rect>::DynBase;
double area() override{
return (self->m_rightUp.x - self->m_leftDown.x )
* (self->m_rightUp.y - self->m_leftDown.y);
}
};
void printShape(Dyn<Shape> ptr)
{
std::cout << ptr->area();
}
int main()
{
Rect r{{1,2},{3,4}};
printShape(&r);
}
but I found that the C++ standard may not guarantee that "new (ptr) T() == static_cast<T*>(ptr)"?
So conversely, without "static_cast<T*>(ptr) == new (ptr) T()" I cannot justify
Trait<void>* operator->(){ return static_cast<Trait<void>*>(reinterpret_cast<void*>(_buffer)); }
Other attempts
An abstract class can't be a union member (why?), and I can't use placement new to calculate the offset at compile time, because Trait is an abstract class.
So does the standard specify the validity of static_cast<T*>(ptr) == new (ptr) T()?
|
The expressions new (ptr) T() and static_cast<T*>(ptr), where ptr has type void*, will return the same address, ptr (as long as T is a scalar type -- array types are allowed to have overhead when dynamically allocated)
However, the semantics are quite different.
In new (ptr) T(), a new object of type T is created at the specified address. Any prior value is lost.
In static_cast<T*>(ptr), there must already be an object of type T at the specified address, or else you are setting yourself up to violate the strict aliasing rule.
If ptr had a type other than void*, it would need to be related to T by inheritance or trivial adjustments (such as changing signedness or adding const or volatile), and the address might have an offset applied as a result of multiple inheritance. But that will never happen in the code of this question.
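In the Dyn wrapper from the question, one way to sidestep the issue entirely (a sketch, not something this answer prescribes) is to keep the pointer that the placement-new expression itself returns instead of casting the buffer back later; the _vptr member here is a hypothetical addition:
alignas(_SizeCaler) char _buffer[sizeof(_SizeCaler)];   // also make sure the buffer is suitably aligned
Trait<void>* _vptr = nullptr;
template<typename T>
Dyn(T* value){
    _vptr = new (_buffer) Trait<T>{value};   // the result of the new expression points to the created object
}
Trait<void>* operator->(){ return _vptr; }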
|
74,226,129
| 74,248,507
|
Why is IExplorerCommand::Invoke() no longer being called?
|
I have created a File Explorer context menu extension that uses the IExplorerCommand interface to add menu commands to the Windows 11 context menu.
This has been working fine, but after the last Windows update, it no longer works properly.
Although the menu commands still appear, nothing happens when I click on any of them. I've added logging and I can see that IExplorerCommand::Invoke() is no longer being called.
Strangely, if I select the "Show more options" menu to get the legacy Windows 10 context menu, the commands work fine from that menu, it is only in the new Windows 11 context menu that they don't work.
I have tried running File Explorer in a debugger while selecting my menu commands, and I get lines like this in the output window when I click on the command:
onecore\com\combase\dcomrem\stdid.cxx(726)\combase.dll!00007FF9EB9947F5: (caller: 00007FF9C22E1E38) ReturnHr(2627) tid(67bc) 8001010E The application called an interface that was marshalled for a different thread.
I'm guessing this is the reason why my commands are not being called. Does anyone have any suggestions for what is causing this? Could it be a bug in File Explorer?
I've tried both STA and MTA threading models, and changing this made no difference.
|
Well, after wasting hours on this I finally have a solution!
My code was based on the PhotoStoreContextMenu sample code here:
https://github.com/microsoft/AppModelSamples/tree/master/Samples/SparsePackages/PhotoStoreContextMenu
This uses the Windows Runtime C++ Template Library (WRL), and defines the base classes used by the class like this:
class TestExplorerCommandBase : public RuntimeClass<RuntimeClassFlags<ClassicCom>, IExplorerCommand, IObjectWithSite>
The change that fixed it for my code was to use WinRtClassicComMix instead of ClassicCom, i.e.
class TestExplorerCommandBase : public RuntimeClass<RuntimeClassFlags<WinRtClassicComMix>, IExplorerCommand, IObjectWithSite>
I'm pretty sure this problem started when I installed KB5019509, which is the Windows update that changes File Explorer so that it now has tabs.
|
74,226,345
| 74,226,507
|
Make simple calculations with C++ pre-processor
|
In my C++ application I configure some features in this way:
#define LED_SIZE 113
#define SEGMENT_SIZE 3
const int LED_SEGMENTS[SEGMENT_SIZE] = {30, 70, 13};
I would like to check if the sum of the literal values are equal to LED_SIZE:
30+70+13 = 113
I'm interested in doing this at compile time, using pre-processor directives.
If the sum is not correct it should not compile.
|
using a pre-processor directives
It is not possible to access arrays using preprocessor directives. LED_SEGMENTS[0] means literally LED_SEGMENTS[0] for preprocessor, there is no array access in preprocessor.
You could stay with it in preprocessor world, and then write a variadic overloaded argument macro to calculate the sum of the comma separated list:
#define LED_SIZE 113
#define SEGMENT_SIZE 3
#define LED_SEGMENTS 30, 70, 13
#define SUM_1(a) a
#define SUM_2(a, ...) a + SUM_1(__VA_ARGS__)
#define SUM_3(a, ...) a + SUM_2(__VA_ARGS__)
#define SUM_4(a, ...) a + SUM_3(__VA_ARGS__)
#define SUM_N(_4,_3,_2,_1,N,...) SUM##N
#define SUM_IN(...) SUM_N(__VA_ARGS__,_4,_3,_2,_1)(__VA_ARGS__)
#define SUM(...) SUM_IN(__VA_ARGS__)
#if SUM(LED_SEGMENTS) != LED_SIZE
#error "SUM(LED_SEGMENTS) != LED_SIZE"
#endif
to do this at compile time
You should use:
#include <functional>
#include <numeric>
#define LED_SIZE 113
#define SEGMENT_SIZE 3
constexpr int LED_SEGMENTS[SEGMENT_SIZE] = {30, 70, 13};
static_assert(std::accumulate(std::begin(LED_SEGMENTS), std::end(LED_SEGMENTS), 0) == LED_SIZE);
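Note that std::accumulate is constexpr only since C++20; on C++14/17 a small constexpr helper gives the same compile-time check (a minimal sketch; std::begin/std::end come from <iterator>):
constexpr int sum(const int* first, const int* last)
{
    int total = 0;
    for (; first != last; ++first)
        total += *first;
    return total;
}
static_assert(sum(std::begin(LED_SEGMENTS), std::end(LED_SEGMENTS)) == LED_SIZE,
              "LED segment lengths do not add up to LED_SIZE");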
|
74,226,424
| 74,226,646
|
Logic involved in solving for binary tree maximum sum
|
This problem comes from leetcode which can be found here. I was reading a solution to this problem which is below
/**
* Definition for a binary tree node.
* struct TreeNode {
* int val;
* TreeNode *left;
* TreeNode *right;
* TreeNode() : val(0), left(nullptr), right(nullptr) {}
* TreeNode(int x) : val(x), left(nullptr), right(nullptr) {}
* TreeNode(int x, TreeNode *left, TreeNode *right) : val(x), left(left), right(right) {}
* };
*/
class Solution {
public:
int maxPathSum(TreeNode* root) {
int res = INT_MIN;
dfs(root, res);
return res;
}
int dfs(TreeNode* root, int& res)
{
if(!root)
return 0;
auto l = max(0, dfs(root->left, res)), r = max(0, dfs(root->right, res));
res = max(res, l + r + root->val);
return max(l, r) + root->val;
}
};
I almost understand the logic of this problem but I do not quite understand why we have 0 in the max for the search part in the left and right subtrees. Can someone explain that to me?
|
The expression l + r + root->val considers what would be the optimal path for which the highest participating node is the current node.
Consider that it could be possible that this optimal path would not include nodes from the left subtree. In that case we want to make sure that l is 0. That is what this max(0, dfs(root->left, res)) expression accomplishes.
More concretely, if the optimal upward path that ends in the node's left child turns out to be negative, then it is not optimal to extend that path further upwards with the current node's value. In that case 0 is used instead, which practically means we don't use any value from the left subtree. For example, if the left child is a leaf with value -4, dfs returns -4 for it, and max(0, -4) = 0, so the path through the current node simply leaves the left subtree out instead of dragging its sum down.
Obviously the reasoning for establishing the value for r is analogous.
|
74,226,582
| 74,232,784
|
Can I write a `concept` to accept a specific template class only?
|
I have a templated class:
template<Vector T>
struct diagonal_matrix;
Now, I want to create a concept DiagonalMatrix for all of templated class versions. So:
DiagonalMatrix<diagonal_matrix<std::vector<double>>> == true
DiagonalMatrix<diagonal_matrix<std::array<float, 4>>> == true
DiagonalMatrix<diagonal_matrix<std::span<int, 10>>> == true
DiagonalMatrix<diagonal_matrix<my::random_access_container_view<unsigned int>>> == true
DiagonalMatrix<other_matrix> == false
Keep in mind that diagonal_matrix has a subset of other_matrix functionality.
Is there a way to create a concept DiagonalMatrix which only accepts struct diagonal_matrix<?> without tagging the struct with a static constexpr bool I_am_a_diagonal_matrix = true;?. I don't care for implementations other than struct diagonal_matrix<?>.
|
If diagonal_matrix has member aliases for all of its template parameters, you can substitute them into diagonal_matrix<>, then check that you get what you started with.
template <typename M>
concept DiagonalMatrix = std::same_as<M, diagonal_matrix<typename M::vector_type>>;
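If you would rather not rely on diagonal_matrix exposing a member alias, a trait with a partial specialization works as well (a sketch; the specialization mirrors the Vector constraint of the primary template):
#include <type_traits>
template <typename>
struct is_diagonal_matrix : std::false_type {};
template <Vector T>
struct is_diagonal_matrix<diagonal_matrix<T>> : std::true_type {};
template <typename M>
concept DiagonalMatrix = is_diagonal_matrix<M>::value;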
|
74,226,681
| 74,227,141
|
Why does MSVC say a call to a virtual constexpr functional call operator does not result in a constant expression?
|
I have a class that wraps an array. It inherits from an abstract base class defining one virtual constexpr method for the function-call operator. In the child class, I override said method and access the internal array:
#include <cstddef>
#include <array>
#include <initializer_list>
template <typename T, std::size_t N>
class ContainerBase {
public:
virtual constexpr const T& operator()(std::size_t i) const = 0;
};
template <typename T, std::size_t N>
class Container : public ContainerBase<T, N> {
public:
constexpr Container(std::initializer_list<T> data) {
std::copy(data.begin(), data.end(), _items.begin());
}
constexpr const T& operator()(std::size_t i) const override {
return _items[i];
}
private:
std::array<T, N> _items;
};
int main () {
constexpr Container<int, 3> C = {2, -91, 7};
constexpr int F = C(1);
static_assert(F == -91);
}
Here is the godbolt link.
To the best of my understanding, this is all legal code in C++20, which allows virtual constexpr. G++ 10.3 and Clang 12 both accept this as valid code, however MSVC 19.33 does not accept it, claiming that variable F is not a constant expression:
msvc_buggy_constexpr.cpp(29,21): error C2131: expression did not evaluate to a constant
msvc_buggy_constexpr.cpp(21,16): message : a non-constant (sub-)expression was encountered
msvc_buggy_constexpr.cpp(31,5): error C2131: expression did not evaluate to a constant
msvc_buggy_constexpr.cpp(21,16): message : a non-constant (sub-)expression was encountered
What gives? This looks like a compiler bug in MSVC to me. I would investigate further but MSVC on godbolt.org seems to be down right now.
I should add that the issue only presents itself when the function call operator is virtual; when it is not, the issue does not occur.
Can anyone advise?
|
User @Barry agrees with me that it's definitely a bug in MSVC.
I've submitted this bug report.
Hopefully the issue gets resolved soon.
Thanks all for your comments and further insights, it's very helpful!
|
74,226,690
| 74,233,108
|
TensorRT finding bounding box data after inference
|
I'm trying to use TensorRT for inference using my trained YOLOv5 model.
The model has been converted to an .engine file, which I have no problem loading and running the inference with. My problem is accessing the data.
What I basically end up getting as output is a 1x25200x85 tensor, which I have no way to process.
So far I have been able to copy the data to the CPU, and tried accessing it as follows:
void postprocessAndDisplay(cv::Mat &img, float *gpu_output, const Dims dims, float treshold){
// Copy to CPU
size_t dimsSize = accumulate(dims.d+1, dims.d+dims.nbDims, 1, multiplies<size_t>());
vector<float> cpu_output (dimsSize);
cudaMemcpy(cpu_output.data(), gpu_output, cpu_output.size()*sizeof(float), cudaMemcpyDeviceToHost);
vector<int> classIds, indices;
vector<cv::Rect> boxes, boxesNMS;
vector<float> confidences;
int img_width = img.cols;
int img_height = img.rows;
int n_boxes = dims.d[1], n_classes = dims.d[2];
// printf("Image size: %i x %i, n_boxes: %i, n_classes: %i\n", img_width, img_height, n_boxes, n_classes);
for (int i = 0; i < n_boxes; i++){
uint32_t maxClass = 0;
float maxScore = -1000.0f;
for (int j = 1; j < n_classes; j++){ // Start at 1 since 0 is the sky???
float score = cpu_output[i * n_classes + j];
// printf("Confidence found %f\n", score);
if (score < treshold)continue;
if (score > maxScore){
maxScore = score;
maxClass = j;
}
}
// printf("Max score for %i, class %i: %f\n", i, maxClass , maxScore);
if (maxScore > treshold){
float left_raw = (cpu_output[4*i]);
float top_raw = (cpu_output[4*i + 1]);
float right_raw = (cpu_output[4*i + 2]);
float bottom_raw = (cpu_output[4*i + 3]);
// int width = right - left + 1;
// int height = bottom - top + 1;
//
// cv::rectangle(img, cv::Rect(left, top, width, height), cv::Scalar(255, 0, 0), 1);
// printf("Drawing rectangle at: %f %f %f %f\n", left_raw, top_raw, right_raw, bottom_raw);
//printf("Found class %i\n", maxClass);
}
}
cv::resize(img, img, cv::Size(1000, 1000));
// cv::imshow("Test", img);
// cv::waitKey(0);
}
However, it seems like trying to find the confidence score with cpu_output[i * n_classes + j] doesn't work, as sometimes the confidence is over 600. When trying to find the bbox data using cpu_output[4*i], I just get a lot of values that are basically 0. Here's the one similar code example I was able to find, but it doesn't use the YOLO network: https://visp-doc.inria.fr/doxygen/visp-3.5.0/tutorial-detection-tensorrt.html
Another weird thing is that the network output is 1x25200x85 while I have just 80 classes, which hints that the 85 includes something else.
Any ideas?
|
The output of the NN describes 25200 boxes with 85 numbers.
Each box represents a unique detection with its bounding rectangle and confidences for each COCO class. There are potentially up to 25200 boxes (since the NN must have a statically sized output), but in practice it only finds a handful of detections for each image.
The first 5 numbers are:
x (topleft)
y (topleft)
width
height
objectness (score for the max class)
The rest, 80 numbers are scores for invidual coco classes. You can find the semantic meaning for those classes here and here.
The x, y, width, height are pixel values between 0...image size (0...640 in the model I use).
So after each inference you have potentially 25200 matches for 80 different classes. But probably you will have less. You will use the objectness to filter out matches that have too low confidence. Then you have to check the maximum value of the 80 class scores. That way you can find the probabilities for each class for that particular bounding rectangle.
In terms of your code, cpu_output should have 25200 rows that have 85 floats each. You need to loop through all the rows like this:
const int rowSize = 85;
const int nClasses = 80;
for (int rowIndex = 0; rowIndex < 25200; ++rowIndex)
{
float* rowBeginPtr = cpu_output.data() + rowIndex * rowSize; // pointer to the start of this row
const float x = rowBeginPtr[0];
const float y = rowBeginPtr[1];
const float w = rowBeginPtr[2];
const float h = rowBeginPtr[3];
const float objectness = rowBeginPtr[4];
if (objectness < scoreThreshold)
{
continue;
}
// Then read indices 5...84 in rowBeginPtr and find the max class score
float maxClassScore = 0.0;
int maxClassIndex = 0;
for (int i = 0; i < nClasses; ++i)
{
const float& v = rowBeginPtr[5 + i];
if (v > maxClassScore)
{
maxClassScore = v;
maxClassIndex = i;
}
}
const float score = objectness * maxClassScore;
if (score < scoreThreshold)
{
continue;
}
// TODO: return x, y, w, h, score
// or save them to your data structure
}
As you can see from the snippet above, you were reading the data from the wrong indices, as you forgot to account for the row size of 85 in your indexing.
I recommend you check out this repository for inspiration, especially this function. Good luck!
|
74,226,968
| 74,236,802
|
Breaking a WHILE Loop with a Function (+ a blocking Function)
|
I am working on a program using threads to handle requests from remote clients in C++. The server will wait for clients to connect and launch a thread doing some things.
For the server to shut down, the user must send an external interrupt with Ctrl+C, and the code will handle the signal (using <csignal>) in order to shut everything down properly.
The waiting itself is made with a while loop, the server waits for a connection, and launches the thread. It's during this loop that I want to handle the signal. Here it is:
std::signal(SIGINT, interruptHandler);
while(intStatus != 2){
ClientSocket client;
client.csock = accept(sock,reinterpret_cast<SOCKADDR*>(&(client.csin)),&(client.crecsize));
tArray.push_back(new pthread_t);
pthread_create(tArray.back(), NULL, session, &client);
}
std::cout<<"Waiting for Clients to exit..."<<std::endl;
for(pthread_t* thread : tArray)
pthread_join(*thread,NULL);
std::sig_atomic_t intStatus is a global variable edited by interruptHandler when it's called.
ClientSocket is a struct containing information about the client:
csock, the socket of the client
csin, the address of the socket
crecsize, which gives the size of the data to send (I think)
I am using <winsock2.h> to connect between server and clients. sock is the socket of the server.
I am using <pthread.h> to manage threads. They are stored in std::list<pthread_t*> tArray.
The problem is: because accept() blocks the process, and the loop condition is only re-checked once an iteration finishes, the server can only shut down when an additional client connects after Ctrl+C has been sent.
Is there a way for interruptHandler() to break the while loop and let the program go ahead with the for loop and what follows? Should I change the algorithm? As a last resort, I know we can set up labels in assembly for the program to jump to. Is there a way to do this in C++ too?
By the way, I am using <pthread.h> and <winsock2.h> because I have to (it's a student project). Thanks for helping.
|
Thanks to @Blindy, I dug into select(2) and came up with something. I will try to explain it here instead of marking @Blindy's answer as the solution, since it took me some time to make it work and I want to save people from the headaches I had.
Here is the new while loop, using select() :
#include <csignal>
#include <sys/time.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <winsock2.h>
#include <sys/ioctl.h> //On Unix (with some other includes)
...
//The Server will only shut down if Ctrl+C is sent
std::signal(SIGINT, interruptHandler);
//Make the socket non-blocking
int success = 1;
int nonBlocking = 1;
//If success = -1 : Error in ioctlsocket(), if success = 1 : ioctlsocket() wasn't called
//Optional condition, for portability sake.
#if defined (WIN32)
//if on Windows
success = ioctlsocket(sock,FIONBIO,&nonBlocking);
#else
//if on Unix (or else)
success = ioctl(sock,FIONBIO,&nonBlocking);
#endif
if(success != 0) throw std::runtime_error("ioctlsocket() failed, code : "+std::to_string(success));
//Declaration of variables needed for select() :
fd_set readSet; //A set of Sockets used to call listen() - here there is only one
timeval inter; //The time interval during select() is waiting for a pending connection
int res; //Declaration of the variable storing select()'s return
while(intStatus != 2){
    //Resetting readSet
FD_ZERO(&readSet);
FD_SET(sock, &readSet);
    //Resetting inter, since select() modifies it on Unix
    inter = {0, 50000}; //select() will wait 50 ms before retrying
res = select(4097,&readSet,NULL,NULL,&inter);
if(res == -1){ //If select() did not finish
        if(errno == EINTR) continue; //If it was because of Ctrl+C
else throw std::runtime_error("Select() failed.");
}
if(res == 0) continue; //If no connections are pending
//If the new readSet modified by select() still has the socket in it
if(!FD_ISSET(sock,&readSet)) throw std::runtime_error("The socket is no longer in readfds");
ClientSocket client;
client.sock = accept(sock,reinterpret_cast<SOCKADDR*>(&(client.sin)),&(client.recsize));
tArray.push_back(new pthread_t);
pthread_create(tArray.back(), NULL, session, &client);
}
I hope it's somewhat clear and I didn't make mistakes. Tell me if something is wrong and I will edit it. Thanks again, Blindy!
|
74,227,764
| 74,232,294
|
MAP_FIXED_NOREPLACE not supported on Ubuntu 20
|
When I run this code I get "mmap: Operation not supported". According to the mmap man page,
that's because one of the flags is invalid (validation is enabled by MAP_SHARED_VALIDATE). The "bad" flag is MAP_FIXED_NOREPLACE
#include <fcntl.h>
#include <errno.h>
#include <sys/mman.h>
#include <string.h>
#include <unistd.h>
#include <iostream> // for std::cout
#include <cstdlib>  // for EXIT_FAILURE
int main(int argc, char** argv)
{
int fd_addr = open("test", O_CREAT | O_RDWR);
if (fd_addr == -1) {
std::cout << "open: " << strerror(errno) << "\n";
return EXIT_FAILURE;
}
if (ftruncate(fd_addr, 100) == -1) {
std::cout << "ftruncate: " << strerror(errno) << "\n";
return EXIT_FAILURE;
}
auto mem =mmap((void*)0x7f4b1618a000, 1, PROT_READ | PROT_WRITE, MAP_FIXED_NOREPLACE | MAP_SHARED_VALIDATE | MAP_LOCKED, fd_addr, 0);
if (mem == MAP_FAILED) {
std::cout << "mmap: " << strerror(errno) << "\n";
return EXIT_FAILURE;
}
}
Any sane ideas what that could be and how to figure out what the problem is? I use Ubuntu 20 (kernel 5.4.0-131-generic) and g++-11 (with glibc 2.31)
g++-11 (Ubuntu 11.1.0-1ubuntu1~20.04) 11.1.0
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
ldd (Ubuntu GLIBC 2.31-0ubuntu9.9) 2.31
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.
Replacing MAP_FIXED_NOREPLACE with MAP_FIXED works just fine.
Compiling like that:
g++-11 -g0 -Ofast -DNDEBUG -Wall -Werror --std=c++2a -march=native -flto -fno-rtti main.cpp -pthread -lrt
|
The only way to answer your question is to look at the source.
Taking an excerpt from do_mmap:
switch (flags & MAP_TYPE) {
case MAP_SHARED:
/*
* Force use of MAP_SHARED_VALIDATE with non-legacy
* flags. E.g. MAP_SYNC is dangerous to use with
* MAP_SHARED as you don't know which consistency model
* you will get. We silently ignore unsupported flags
* with MAP_SHARED to preserve backward compatibility.
*/
flags &= LEGACY_MAP_MASK;
fallthrough;
case MAP_SHARED_VALIDATE:
if (flags & ~flags_mask)
return -EOPNOTSUPP;
and comparing your flags against those in LEGACY_MAP_MASK,
it becomes evident that MAP_FIXED_NOREPLACE is not part of LEGACY_MAP_MASK, so the MAP_SHARED_VALIDATE path returns the "operation not supported" error you report.
In this case, the MAP_FIXED_NOREPLACE bits result in an extra check up front and otherwise behaves as MAP_FIXED.
Long story short: in this situation you can replace MAP_SHARED_VALIDATE with MAP_SHARED without loss of functionality.
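For example, a minimal adjustment of the call from the question (keeping MAP_FIXED_NOREPLACE and dropping only the _VALIDATE variant, as suggested above) would be:
auto mem = mmap((void*)0x7f4b1618a000, 1, PROT_READ | PROT_WRITE,
                MAP_FIXED_NOREPLACE | MAP_SHARED | MAP_LOCKED, fd_addr, 0);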
Doing what you suggested (MAP_FIXED_NOREPLACE to MAP_FIXED) has the potential of accidentally creating overlapping memory mappings.
|
74,228,635
| 74,228,692
|
How to use value of a class enum : char as a char argument of function?
|
In an endeavor to write cleaner code I'm (likely over-)using enums, like so:
enum class SoundFileCode : char {
PC_STARTUP = '1', // probably never used
LOADING_READY = '2',
VOICE_RECOGNITION_ON = '3',
VOICE_RECOGNITION_OFF = '4',
DISPENSING = '5'
};
I'd like to be able to pass that value to a function in a nice clean way (preferably without having to cast anything to make it a cleaner API):
sendRemoteAudioMessage(SoundFileCode::LOADING_READY);
Here's the function signature:
void sendRemoteAudioMessage(char audioCode){ }
I see you shaking your head. You already know the error I'm going to get: Compilation error: cannot convert 'SoundFileCode' to 'char' for argument '1' to 'void sendRemoteAudioMessage(char)'
My goals are:
To make it easy for code completion to suggest these hard-coded constant values
To be able to reuse the same names, but keep them namespaced cleanly within their own enum
To make it easy for a programmer to just select a code and pass it to the function without thinking too hard about it.
If enums won't fit the bill here, what's a good approach? Creating a class with some static constants? (this is likely how I'd approach it in PHP / Java)
UPDATE
The following (if somewhat verbose) approach compiles and runs fine. Is there a superior approach?
class SoundFileCode {
public :
static const char PC_STARTUP = '1';
static const char LOADING_READY = '2';
static const char VOICE_RECOGNITION_ON = '3';
static const char VOICE_RECOGNITION_OFF = '4';
static const char DISPENSING = '5';
};
|
enum class purposefully disallows implicit casts to the underlying type, so you'd either need static_cast<>, or - preferably - take the enum in the callee.
If neither of the above works, e.g. because the functions you call are in a library, you can still do the old trick of wrapping the enum into a namespace (or class):
namespace SoundFileCode {
enum type : char {
PC_STARTUP = '1',
// ...
    };
}
In this case, you'll still have SoundFileCode::PC_STARTUP for the name of the enum value, but you can implicitly cast to char. An inconvenience is that now you need SoundFileCode::type in declarations of the enum (and other places where the type is used):
int main() {
SoundFileCode::type sfc = SoundFileCode::PC_STARTUP;
sendRemoteAudioMessage(sfc);
sendRemoteAudioMessage(SoundFileCode::PC_STARTUP);
}
|
74,229,538
| 74,230,025
|
Why does the cast operator attempt to call the constructor first?
|
I have an object that stores some state in a member of type S. When the view method is called, it returns a view of that state as type T, obtained by casting.
template<typename S, typename T>
struct Viewer {
S m_state;
Viewer(const S& state): m_state(state) {}
auto view() {
// return m_state.operator T();
return static_cast<T>(m_state);
}
};
int main() {
Viewer<double, double> v(1.);
std::cout << v.view() << std::endl;
}
Let's suppose that the object is made like this:
#include <tuple>
struct State: std::tuple<int, int> {
template<typename T>
State(const T& t) { std::get<0>(*this) = std::get<1>(*this) = t; }
template<typename T>
State(const T& t1, const T& t2) {
std::get<0>(*this) = t1;
std::get<1>(*this) = t2;
}
};
std::ostream& operator<<(std::ostream& os, const State& s) {
return os << std::get<0>(s) << " " << std::get<1>(s);
}
struct Wrapper {
State m_state;
Wrapper(const State& initial_state): m_state(initial_state) {}
operator State() const { return m_state; } // Conversion operator
};
int main() {
State s(0, 1);
Wrapper w(s);
Viewer<Wrapper, State> v(w);
std::cout << v.view() << std::endl; // <<< Error
}
By compiling this code now I get the error
error: cannot convert ‘const Wrapper’ to ‘std::__tuple_element_t<1, std::tuple<int, int> >’ {aka ‘int’} in assignment
State(const T& t) { std::get<0>(*this) = std::get<1>(*this) = t; }
Basically static_cast<T>(m_state) is trying to perform the cast by calling the constructor of the object of type T (with T=State), passing to it m_state (which is of type Wrapper) as the only argument. Since T has a template constructor that takes a single argument, it gets called. However the assignment fails because that constructor was not intended to work with an object of type Wrapper.
By dropping the constructor template<typename T> State(const T& t); from the State class the problem gets solved, because static_cast can no longer find a valid constructor to call. However, I need that constructor, and I cannot simply remove it.
If I explicitly call the cast operator in the Viewer::view method, the code runs again
return m_state.operator T();
However, it no longer works for a Viewer<double, double> object, because double does not have the member operator double
error: request for member ‘operator double’ in ‘((Viewer<double, double>*)this)->Viewer<double, double>::m_state’, which is of non-class type ‘double’
And I also need the Viewer class to work by instantiating it with <double, double>.
I don't want to specialize the State class constructor, because it should be unaware of the Wrapper class: what if a second wrapper class was created? Would it be necessary to specialize the state constructor further? Also, I prefer not to have specializations for the Viewer class. Again, what would happen if you created multiple wrapper classes? Or if you used it with an object without an explicit conversion operator?
I think the best solution is that the casting operation does not try to call the constructor when a suitable operator is available, however I have no idea how to do that. How can I solve this problem?
|
You don't have to specialize the State constructor, but you can SFINAE out the constructor template so that the default copy constructor is called in this case.
struct State: std::tuple<int, int> {
template<typename T,
typename = std::enable_if_t<std::is_convertible_v<T, int>>>
State(const T& t) {
std::get<0>(*this) = std::get<1>(*this) = t;
}
};
Demo
Without the std::enable_if part, the constructor template is instantiated with T=Wrapper during overload resolution. The resulting State(const Wrapper&) is an exact match for the argument, so it is preferred over the conversion-operator-plus-copy-constructor path that you expect.
|
74,229,580
| 74,230,178
|
maya api save/keep data in memory generated by command?
|
I wrote a maya plugin (class).
Executing the plugin/command in Maya will produce some data. Executing the plugin/command again will depend on the data generated by the last execution, but that data is destroyed along with the plugin class. How do I save/keep the data generated after a command is executed in memory?
Perhaps by creating custom nodes in the scene? Is there a better and more convenient way?
|
The only storable database in Maya is the node graph, and the attributes on the nodes. If you have written a command that is generating data in the DG, you probably should have written that as a node (where the input attributes are your function arguments, and the output attributes are the function outputs)
Typically you write an MPxCommand paired with an MPxNode. The command simply creates an instance of the node. When the -query flag is provided, you are returning data from the node attributes. When the -edit flag is present, you are modifying attributes on the node.
There is literally no way in Maya for an MPxCommand to store data other than by using the node graph (possibly optionVars to store user preferences, but that won't be tied to scene data).
Ignoring the fact that you probably want to be writing a node for now....
Either....
create an instance of a node type that already exists (e.g. transform or set), and add custom attributes to store the data you need. Relationships to other nodes in the scene can be stored using connections (sets may work well if you need to refer to many objects). The data will now be serialised into the Maya binary/ascii file so your tool will still work once the scene is reloaded.
Create a custom MPxNode, and do that same thing, but with a hardcoded set of attributes. This just means users can't delete the attributes, and you can get a bit more control over them (such as making them readonly, and possibly hoisting any computation out of your command, and moving it into the node's compute method).
If the data you need to store doesn't fit into the existing attribute types, you may need to add an MPxData object (which can be stored on an attribute within a custom node). This is useful for large binary blobs of data.
|
74,229,614
| 74,229,632
|
Check if string contains Enum Value C++
|
I am trying to solve an assignment and I searched in every possible way for an answer, but managed to only get "how to transform an enum into a string"
I have a string called type that can only contain "webApp", "MobileApp" and "DesktopApp". I have an enum that looks like this:
enum applicationType { webApp = 5, MobileApp = 10, DesktopApp = 15 };
How can I print the value from the enum corresponding to the string using a function?
applicationType print_enum(string tip) {
//if tip contains "webApp" i should print 5
}
How do I make the function test if the value of the string corresponds to a value in the enum, and if it does print that value?
|
You simply can't.
The names of c++ objects, types and enum entries are not available as strings. They're effectively just placeholders, identifiers for things the compiler needs to identify. It discards these identifiers pretty early on the way from source code to machine code.
(As of c++20. Who knows what the future brings.)
If you need that mapping, you have to replicate your enum by hand, e.g. as a std::unordered_map<std::string, applicationType> that maps each name to its enumerator.
If all you need is the numeric value, you can map the string directly to an int and forget about the enum.
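A minimal sketch of that lookup-table approach (the function name mirrors the one from the question; the table itself is the part you have to maintain by hand):
#include <iostream>
#include <string>
#include <unordered_map>

enum applicationType { webApp = 5, MobileApp = 10, DesktopApp = 15 };

void print_enum(const std::string& tip) {
    // Hand-written replica of the enum: name -> enumerator
    static const std::unordered_map<std::string, applicationType> lookup = {
        {"webApp", webApp}, {"MobileApp", MobileApp}, {"DesktopApp", DesktopApp}
    };
    auto it = lookup.find(tip);
    if (it != lookup.end())
        std::cout << it->second << "\n";   // prints 5 for "webApp"
    else
        std::cout << "unknown type\n";
}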
|
74,229,738
| 74,229,886
|
Running a C++ Script In Visual Studio 2022 keeps deleting text within a file, not sure what's wrong with the code
|
I've been stuck working on this painful program for a few hours now in Visual Studio 2022, and am struggling to wrap my head around why it keeps deleting text in a different file each time I run it.
For reference: For a C++ related assignment I have, I have to design a program that opens a file (age.txt in this instance), and displays the data within it (which should be 25).
Here's the code I have written, for reference:
#include<iostream>
#include<string>
#include<fstream>
using namespace std;
int main()
{
ifstream din;
ofstream dout;
string inpFileName;
string age; // The user's age
// Prompt for the age:
cout << "Please provide the file containing your age" << endl << endl;
cin >> inpFileName;
din.open(inpFileName.c_str()); // File extension is arbitrary
dout.open("age.txt"); //
if (din)
{
// Read the age:
getline(din, age);
din >> age;
// Output the age
cout << "Your age is: " << age << " years." << endl;
}
else
cout << inpFileName << " was not found. " << endl;
din.close();
dout.close();
system("pause");
return 0;
}
And oddly enough, it seems to work fine for the most part, as it prompts me to enter a file, where I insert age.txt. However, each time I try to insert the file in there, this is the output I get:
Please provide the file containing your age
age.txt
Your age is: years.
Press any key to continue . . .
Where it appears that, each time I run the program, it deletes the number "25" from age.txt, and proceeds to give me a blank result instead.
Would anybody know what's causing this, and if possible, what's a solution that I could use to stop my C++ program from deleting the text in my attached file?
Here are some things I've tried in order to fix the problem:
-Reworking if/else statements (didn't work)
-Reworking cout/dout statements (didn't work)
-Removing and adjusting file lines in both my script and in age.txt (didn't work)
-Adjusting the properties of the strings (didn't seem to work? Didn't get the intended output).
-Playing with #include statements whilst trying to get results (No luck there either)
-Adjusting administrative settings on my computer to see if it was blocking anything (no luck)
Among other attempted fixes, with no luck whatsoever. I feel completely lost on this. (For reference, I'm using Visual Studio 2022 for this script)
|
First, a mini-code review:
#include<iostream> // Poor spacing
#include<string>
#include<fstream>
using namespace std; // Bad practice
int main() // I'm noticing that a lot of your lines all have an extra space at the end
{
ifstream din; // Don't front-load your declarations; declare when you need them.
ofstream dout; // As described, you don't need this at all.
string inpFileName;
string age; // The user's age // Isn't an age an integer?
// Prompt for the age:
cout << "Please provide the file containing your age" << endl << endl;
cin >> inpFileName;
// Conversion to a C-string has not been necessary for about a decade.
din.open(inpFileName.c_str()); // File extension is arbitrary
dout.open("age.txt"); //
// Better to check for the error up front.
if (din)
{
// Read the age:
getline(din, age);
din >> age;
// Output the age
cout << "Your age is: " << age << " years." << endl;
}
else // Poor formatting
cout << inpFileName << " was not found. " << endl;
din.close();
dout.close();
system("pause"); // Lots of garbage whitespace on this line.
return 0;
}
The biggest issue is the extra output file stream. By default, an output file stream wipes the file it opens. You don't even need one, based on the description provided. Getting rid of it would be the minimum thing needed to fix your code.
And, while I fully understand this will likely be beyond what you've learned, passing the file name as an argument to main() would be far better.
#include <fstream>
#include <iostream>
int main(int argc, char** argv) {
if (argc != 2) {
std::cerr << "Need a file name.\n";
return 1;
}
std::ifstream din(argv[1]);
if (!din) {
std::cerr << "Error opening file.\n";
return 2;
}
int age;
din >> age;
std::cout << "Your age is: " << age << '\n';
din.close(); // Technically not necessary; the OS will close the file since
// the program is ending anyway, but consider it a best
// practice to handle your business yourself. You'll learn
// ways to automate this later on.
}
Output:
❯ ./a.out age.txt
Your age is: 25
|
74,230,074
| 74,230,238
|
How do I correctly use 3D Perlin Noise as turbulence for my particle system?
|
So I am working on a particle system, mainly as a learning exercise on the CPU, using Visual Studio C++. It's looking pretty neat!
The latest thing I'm attempting is to add turbulence using 3D perlin noise. I found this fellow's code: https://blog.kazade.co.uk/2014/05/a-public-domain-c11-1d2d3d-perlin-noise.html
I implemented it correctly. I know this, because I can draw solid, working Perlin noise within my app, and it draws it correctly, also taking into account octaves, amplitude and frequency, which I added access to. So far, so good.
The problem is that I don't know how to use this correctly for displacing particle motion. This is currently my implementation (px0, py0, pz0 are my particle positions in -1.0 to 1.0 screen-space range. 0.1 is just to scale values down to a usable amount):
//Initialize octaves, seed, amplitude, frequency
noise::PerlinOctave perlin(octaves, seed, amplitude, frequency);
// Call the noise function
float n = perlin.noise(px0,py0,pz0) * 0.1;
px0 += n;
py0 += n;
pz0 += n;
This produces ok results but when I adjust the amplitude, my particles move diagonally. This is usually due to using addition which makes me think perhaps I shouldn't be adding noise to the particle positions but rather multiplying. However, I haven't had any success trying that.
I also tried unsuccessfully assigning 1D Perlin noise to every axis like this:
px0 += perlin.noise(px0) * 0.1;
py0 += perlin.noise(py0) * 0.1;
pz0 += perlin.noise(pz0) * 0.1;
The noise function returns a value of -1.0 to 1.0 (I believe) and my particle screen-space also works that same range so there should be no need to remap the noise to 0.0-1.0.
So I can't think of anything else to try. Any ideas where I'm going wrong?
Thank you all in advance!
-Richard
|
I think you may have a few things backwards here.
3D Perlin noise defines a noise function that takes a 3D input, to provide you with a 1D output. What you need is something that takes a 3D input, and gives you a 3D output.
You could achieve this by having 3x3D noises....
noise::PerlinOctave3D perlinX(octaves, seedX, amplitudeX, frequencyX);
noise::PerlinOctave3D perlinY(octaves, seedY, amplitudeY, frequencyY);
noise::PerlinOctave3D perlinZ(octaves, seedZ, amplitudeZ, frequencyZ);
// Call the noise function(s)
float nx = perlinX.noise(px0,py0,pz0) * 0.1;
float ny = perlinY.noise(px0,py0,pz0) * 0.1;
float nz = perlinZ.noise(px0,py0,pz0) * 0.1;
px0 += nx;
py0 += ny;
pz0 += nz;
Using 3x1D noise funcs would also work (maybe not that well, but it will work)
noise::PerlinOctave1D perlinX(octaves, seedX, amplitudeX, frequencyX);
noise::PerlinOctave1D perlinY(octaves, seedY, amplitudeY, frequencyY);
noise::PerlinOctave1D perlinZ(octaves, seedZ, amplitudeZ, frequencyZ);
// Call the noise function(s)
float nx = perlinX.noise(px0) * 0.1;
float ny = perlinY.noise(py0) * 0.1;
float nz = perlinZ.noise(pz0) * 0.1;
px0 += nx;
py0 += ny;
pz0 += nz;
Even using 2D funcs would work (and be a bit cheaper than 3x3D funcs).
noise::PerlinOctave2D perlinX(octaves, seedX, amplitudeX, frequencyX);
noise::PerlinOctave2D perlinY(octaves, seedY, amplitudeY, frequencyY);
noise::PerlinOctave2D perlinZ(octaves, seedZ, amplitudeZ, frequencyZ);
// Call the noise function(s)
float nx = perlinX.noise(px0, py0) * 0.1;
float ny = perlinY.noise(py0, pz0) * 0.1;
float nz = perlinZ.noise(pz0, px0) * 0.1;
px0 += nx;
py0 += ny;
pz0 += nz;
If you are modelling turbulence though, you probably want to be extracting the noise to be used as a force to apply to the particles, rather than simply using the result to directly modify the position.
|
74,230,076
| 74,231,274
|
clang-tidy complains on std::string in structs
|
struct Thing {
std::string something;
};
Clang-tidy complains:
warning: an exception may be thrown in function 'Thing' which should not throw exceptions [bugprone-exception-escape]
I know about the bugprone-exception-escape.FunctionsThatShouldNotThrow setting, but I could not figure out what to put there to suppress that warning for all structures that contain strings.
Any suggestions?
|
It is a bug in the check, see https://github.com/llvm/llvm-project/issues/54668, apparently introduced with LLVM 14. Going by the issue description there isn't really much you can do except suppressing it individually for each class.
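For example, a per-class suppression can look like this (NOLINT is regular clang-tidy syntax; depending on where the diagnostic is attributed you may need to put it on the exact line the warning points at):
#include <string>

struct Thing {  // NOLINT(bugprone-exception-escape)
    std::string something;
};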
If that is too much work, you might want to consider disabling the check until they have fixed this. According to the linked issue a fix is already under review: https://reviews.llvm.org/D134588
|
74,230,847
| 74,231,184
|
C++ passing family of objects to class
|
I am trying to find a way to create a generic "car" class, with children that override the parent's methods. Subsequently, I would like to have a user class that has as a member any class in the "car" family. Is there a way to achieve the desired functionality? Thank you!
The pseudocode below shows my initial attempt, but the compiler obviously complains that User wants a Car object, not a Toyota.
class Car
{
public:
Car();
void method1();
};
class Toyota : public Car
{
public:
Toyota();
void method1();
};
class Lada : public Car
{
public:
Lada();
void method1();
};
class User
{
public:
User();
Car car;
};
int main()
{
User user;
Toyota toyota;
user.car = toyota;
}
|
I think you should use virtual functions inside Car and store a pointer (or reference) when assigning a Toyota to the car member; the code written below works.
#include <iostream>
using std::cout;
class Car
{
public:
Car(){cout << "A car";};
virtual void method1(){cout << "from car";};
};
class Toyota : public Car
{
public:
Toyota(){cout << "A toyota";};
void method1() {cout << "from toyota";};
};
class Lada : public Car
{
public:
Lada(){cout << "A Lada";};
void method1() {cout << "from lada";};
};
class User
{
public:
User(){cout << "A user";};
Car *car;
};
int main()
{
User *user = new User();
Toyota toyota;
user->car = &toyota;
delete user;
    /*
     If you don't need dynamic allocation, then:
         User user;
         Toyota toyota;
         user.car = &toyota;
    */
}
|
74,231,073
| 74,231,198
|
Is there a function that gives you index of last char substring in string?
|
The find() function gives you the index of the first char of a substring in a string; I need the last char.
I tried to get the length of the substring and add it to the first index, but it goes out of bounds.
// if( str2.substr(last char index, str2.find(part3)))
int sizeOfPart2 = part2.length();
int sizeOfPart3 = part3.length();
if(sizeOfPart2 == 1){
sizeOfPart2 = 0;
}
else if(sizeOfPart3 == 1){
sizeOfPart3 = 0;
}
cout<<str2.substr(str2[str2.find(part2) + sizeOfPart2],
str2.find(part3));
|
You can do something like:
const std::string path = "repeatedstrieang";
std::string mysubstr = "ea";
auto firstCharPos = path.find("ea");
if(firstCharPos!=std::string::npos)
{
std::cout << firstCharPos + mysubstr.size() -1; //-1 because indexing starts from `0`
}
|
74,231,171
| 74,231,480
|
How to return compile-constant string literal with C++ constexpr function
|
I have a constexpr function and I'm trying to strip the file name from the __FILE__ macro, that is, remove everything but the path. I sketched up this basic function to do so, and I made it constexpr in hopes that the compiler can deduce the result and just place that calculated result as a string in the final binary. The function isn't perfect, just a simple mock-up.
constexpr const char* const get_filename()
{
auto file{ __FILE__ };
auto count{ sizeof(__FILE__) - 2 };
while (file[count - 1] != '\\')
--count;
return &file[count];
}
int main()
{
std::cout << get_filename() << std::endl;
return 0;
}
The problem is that this is not being evaluated at compile time (build: MSVC x64 Release Maximum Speed optimization). I'm assuming this is because of returning a pointer to something inside a constant string in the binary, which is essentially what the function is doing. However, what I want the compiler to do is parse the get_filename function and somehow return the string literal "main.cpp", for example, instead of returning a pointer of that substring. Essentially, I want this to compile down so that the final binary just has main.cpp in it, and nothing else part of the __FILE__ macro. Is this possible?
|
Because you don't want the full __FILE__ path in the final binary, we must copy the string to a std::array:
constexpr auto get_filename()
{
constexpr std::string_view filePath = __FILE__;
constexpr auto count = filePath.rfind("\\");
    static_assert(count != std::string_view::npos);
    // filePath.size() - count leaves room for the file name plus a terminating '\0'
    std::array<char, filePath.size() - count> fileName{};
std::copy(filePath.data() + count + 1, filePath.data() + filePath.size(), fileName.data());
return fileName;
}
And specify constexpr when calling the get_filename function:
constexpr auto fileName = get_filename();
std::cout << fileName.data();
Alternatively, since C++20, you could use consteval to force it to be evaluated at compile time:
consteval auto get_filename();
Here's the test on godbolt, it uses printf instead of std::cout for a shorter asm.
|
74,231,905
| 74,241,864
|
How to convert Eigen Matrix to Torch Tensor?
|
I'm new to C++.
I use LibTorch and Eigen and want to convert an Eigen::Matrix to a torch::Tensor, but I could not get it to work.
I wrote the code below referring to https://github.com/andrewssobral/dtt
#include <torch/torch.h>
#include <Eigen/Dense>
#include <iostream>
int main(){
Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> E(1,3);
E(0,0) = 1;
E(0,1) = 2;
E(0,2) = 3;
std::cout << "E" << std::endl;
std::cout << E << std::endl;
std::vector<int64_t> dims = {E.rows(), E.cols()};
torch::Tensor T = torch::from_blob(E.data(), dims).clone();
std::cout << "T" << std::endl;
std::cout << T << std::endl;
}
I run this code and get the output below.
E
1 2 3
T
0.0000 1.8750 0.0000
[ CPUFloatType{1,3} ]
But I expect output to be:
E
1 2 3
T
1 2 3
[ CPUFloatType{1,3} ]
|
By default, libtorch assumes that the tensor you are creating is a float tensor (see CPUFloatType at the end of the printout). Since your Eigen matrix is double, you need to change this behavior like this:
auto options = torch::TensorOptions().dtype(torch::kDouble);
torch::Tensor T = torch::from_blob(E.data(), dims, options).clone();
|
74,232,191
| 74,232,346
|
i made custom strsplit function but output is weird
|
#include <iostream>
using namespace std;
int ft_strlen(char *str, char ascii[])
{
int i = 0;
while (!ascii[(short)str[i]] && str[i])
++i;
return (i);
}
int word_count(char *s, char *ascii) {
int i = 0, cnt = 0;
while (s[i])
{
if (ascii[(short)s[i]])
while (ascii[(short)s[i]])
++i;
else {
++cnt;
while (!ascii[(short)s[i]] && s[i])
++i;
}
}
return (cnt);
}
char **ft_strsplit(char *s, char *ascii) {
if (!s)
return NULL;
int wc = word_count(s, ascii);
char **ans = (char **)malloc(wc);
char *result;
int i = 0, j = 0, k;
while (wc--) {
result = (char *) malloc(ft_strlen(s + i, ascii) + 1);
for (k = 0; s[i] != '\0'; ++i, ++k) {
if (!ascii[(short)s[i]])
result[k] = s[i];
else {
result[i] = '\0';
ans[j++] = result;
//inside the printf
printf("%d %s\n", j - 1, ans[j - 1]);
while (ascii[(short)s[i]])
++i;
break;
}
}
result[i] = 0;
ans[j] = result;
}
for (int i = 0; i < 5; ++i) {
printf("%d %s\n", i, ans[i]);
}
return ans;
}
int main()
{
char ascii[256] = {0,};
char d[] = ", ";
int i = -1;
while (d[++i])
ascii[(short)d[i]] = 1;
char str[90] = "it, is my, day 2,";
char** ptr = ft_strsplit(str, ascii);
for (int i = 0; i < 5; ++i) {
printf("%d %s\n", i, ptr[i]);
}
return 0;
}
I split the string and printed the parts. The printf inside the loop prints correctly (please look at the comment), but in the later printf calls the first string comes out garbled. I don't know why this is happening. Below is the code's output.
0 it
1 is
2 my
3 day
4 2
0 8P
1 is
2 my
3 day
4 2
0 8P
1 is
2 my
3 day
4 2
I expect all printf output like
0 it
1 is
2 my
3 day
4 2
I tried this code at https://cpp.sh/. If this output depends on the compiler, please tell me.
|
It's been a ghastly number of years since I've done C code, but from what I recall, this line:
char **ans = (char **)malloc(wc);
will only give you one byte for each word in the string, when you actually need a pointer's worth of space per word. I believe what you actually want there is:
char **ans = (char **)calloc(wc, sizeof(char*));
Also, why are you bothering learning C?? Even if you ignore the object-oriented features of it, C++ is a much "Better C" (that's one of its design goals), and gives you things like bool.
|
74,232,265
| 74,232,658
|
std::format errors "no matching function" and "call to consteval function"
|
Platform: Win 10 64 bit
IDE: CLION 2022.2.4
Toolchain: VS 2022 Community Toolset v17.0 (CMAKE 3.23.2)
Build Tool ninja.exe
C++ compiler: cl.exe
#include <iostream>
#include <string>
#include <format>
int main() {
std::wstring test1 = L"Hällo, ";
std::wstring test2;
std::cout << std::format("Hello {}\n", "world!");
std::cout << std::format("Hello {}\n", "world!");
}
Errors shown in editor:
I suspect this is a Clang error, but I'm not quite sure. I can compile the code just fine and I get output to the console. But why do I get an error here?
I tried to find anything on Google on it, but I didn't find anything on this specific error. I know from a friend of mine that this is not an isolated issue, or at least he has the same issue.
From what I read the "consteval" was somewhat newly introduced and might still be incompletely implemented in some library functions?
|
The compiler that your editor uses for highlighting and intellisense doesn't support those features yet.
std::format is a c++20 feature. According to this page it is not yet supported in Clang. This has been discussed here.
|
74,232,544
| 74,242,188
|
Make clang initializing an array by copy from constant when type has volatile copy-assignment operator
|
I have a class similar to
class A {
public:
constexpr A(int v) : v(v) { /* logic */};
A& operator=(const A&) = default;
// This is error: the parameter for an explicitly-defaulted copy assignment operator may not be volatile
// A& operator=(const volatile A&) = default;
// This disables initializing A by memcopy from a global constant.
A& operator=(const volatile A& a) {
v = a.v;
return *this;
}
int v;
};
void f() {
// 1st requirement: Fast constant initialization of big arrays
A a[20] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19};
// 2nd requirement: Allow assignment from volatile
volatile A b(1);
a[0] = b;
}
(https://godbolt.org/z/fz5raT1n8)
and I would like to
a) both allow the code in f() to compile, especially the assignment from a volatile object.
This is because there are already users of the class that do this, and it's hard to upgrade them.
b) and make the initialization of big arrays of A efficient. When I remove the volatile operator=, then clang creates a constant global and a memcpy (https://godbolt.org/z/37j1zesjY), but I didn't find a way how to achieve this while still having requirement (a). It seems that clang only performs this initialization when the class is trivially copyable.
I tried defaulting the A& operator=(const volatile A& a) but that is not allowed by clang. I checked the clang source code (AggExprEmitter::EmitArrayInit in CGExprrAgg.cpp) to find that it checks whether the type is trivially copyable before doing this optimization.
Is there a way how I can write my code to achieve this?
If no, would it be a conforming implementation to change clang to perform the
the initialization via a constant global even when the class is not trivially copyable as long as the copy constructor that would be used is trivial?
Thanks!
|
This seems like a bug in Clang; while the programmer may only expect memcpy to be well-defined when used on trivially copyable types, the compiler is permitted to replace a series of simple stores to adjacent memory locations with a memcpy call when it can determine that the result will be the same, regardless of whether the type is trivially copyable or not. Based on my brief experiments on Godbolt, GCC is able to perform the optimization without any issues. I recommend filing a bug report with Clang.
A workaround is to make the second assignment operator a template that is constrained to only accept const volatile A&. Its templated nature prevents it from being considered a copy-assignment operator, which means A is still trivially copyable.
template <class Other,
std::enable_if_t<std::is_same<Other, volatile A>::value>* = nullptr>
A& operator=(const Other& a) {
v = a.v;
return *this;
}
You might also want to make it accept any Other that's the volatile version of any class derived from A, not just A itself. This is left as an exercise for the reader.
(There are still more cases where this template will not accept the same arguments as the corresponding non-template function, and in some cases there might be nothing that the class author can do about it. However, I expect that the number of call sites that have this issue will be low, so perhaps those can just be fixed.)
|
74,233,537
| 74,236,137
|
What is function with multiple variadic args?
|
I don't understand how this code works. Could anyone please enlighten me a bit. I was pretty much sure "the parameter pack should be the last argument"
void foo(auto&&...args1, auto&&... args2, auto&&... args3) {
std::cout << "args1:\n", ((std::cout << args1 << " "), ...);
std::cout << "args2:\n", ((std::cout << args2 << " "), ...);
std::cout << "args3:\n", ((std::cout << args3 << " "), ...);
}
int main(int argc, char** argv)
{
foo(1,2,3,4,5,6);
}
If it's allowed, how can I split args1, args2 and args3?
The compiler (g++-11) assumes all parameter packs except args3 are empty, so the output is
args1:
args2:
args3:
1 2 3 4 5 6
|
The program is ill-formed and gcc and clang are wrong in accepting the code. You can also confirm this by slightly modifying your code to as shown below. There is also an old gcc bug for this.
Basically, in the case of function templates (in the modified program shown below), multiple template parameter packs are permitted, as long as each template parameter following a template parameter pack has either a default value or can be deduced. But since neither of these requirements is satisfied for T2 and T3 (they can neither be unambiguously deduced nor have default arguments), the program is ill-formed.
The exact same reasoning applies to your given example as foo(in your original example) is a generic function(aka abbreviated function template).
GCC and Clang shows the same incorrect behavior for the following modified program: Demo
template<typename... T1, typename... T2, typename... T3>
void foo(T1&&...args1, T2&&... args2, T3&&... args3) {
std::cout << "args1:\n", ((std::cout << args1 << " "), ...);
std::cout << "args2:\n", ((std::cout << args2 << " "), ...);
std::cout << "args3:\n", ((std::cout << args3 << " "), ...);
}
int main(int argc, char** argv)
{
foo(1,2,3,4,5,6); //gcc and clang compiles this while msvc correctly rejects this
}
Here is the gcc bug:
GCC accepts invalid program involving multiple template parameter packs
Here is the clang bug:
Clang accepts invalid program involving multiple template parameter packs
The same can also be seen from temp.param:
A template parameter pack of a function template shall not be followed by another template parameter unless that template parameter can be deduced from the parameter-type-list ([dcl.fct]) of the function template or has a default argument ([temp.deduct]).
// U can be neither deduced from the parameter-type-list nor specified
template<class... T, class... U> void f() { } // error
(emphasis mine)
|
74,233,806
| 74,234,298
|
Problem using boost::multi_index with composite key member functions
|
I have the following container:
using KeyValue = mutable_pair<Key, Value>;
using MyContainer = boost::multi_index_container<
KeyValue,
boost::multi_index::indexed_by<
boost::multi_index::hashed_unique<
boost::multi_index::tag<KeyValueTag>,
boost::multi_index::composite_key<
KeyValue,
boost::multi_index::const_mem_fun<KeyValue::first_type, unsigned int, &KeyValue::first_type::foo>,
boost::multi_index::const_mem_fun<KeyValue::first_type, unsigned int, &KeyValue::first_type::bar>,
boost::multi_index::const_mem_fun<KeyValue::first_type, unsigned int, &KeyValue::first_type::baz>
>
>,
boost::multi_index::hashed_non_unique<
boost::multi_index::tag<BazTag>,
boost::multi_index::const_mem_fun<KeyValue::first_type, unsigned int, &KeyValue::first_type::baz>
>,
boost::multi_index::hashed_non_unique<
boost::multi_index::tag<BarTag>,
boost::multi_index::const_mem_fun<KeyValue::first_type, unsigned int, &KeyValue::first_type::bar>
>
>
>;
Where mutable_pair is the boost provided example for maps, and Key is a class that contains const member accessors for foo, bar and baz.
The code compiles fine, however when trying to query by any index ie:
MyContainer c;
const auto& byBaz = c.get<BazTag>();
const auto it = byBaz.find(11);
// or
const auto [beg, end] = byBaz.equal_range(11);
it complains of
<long instantiation template error>
in mem_fun.hpp:58:23: error: no match for ‘operator*’ (operand type is ‘const mutable_pair<Key, Value>’)
58 | return operator()(*x);
What am I missing? I've been struggling with this for hours :(
|
The code compiles fine
That's because template members aren't instantiated unless you use them. You don't have valid indexes for your element type.
Your indexes are trying the equivalent of
KeyValue pair;
unsigned (Key::*pfoo)() const = &Key::foo;
(pair.*pfoo)()      // ill-formed: pfoo is a member of Key, not of KeyValue
Instead of
(pair.first.*pfoo)();
You need accessors for KeyValue, not Key
unsigned int getFoo(const KeyValue & pair) {
return pair.first.foo();
}
unsigned int getBar(const KeyValue & pair) {
return pair.first.bar();
}
unsigned int getBaz(const KeyValue & pair) {
return pair.first.baz();
}
using MyContainer = boost::multi_index_container<
KeyValue,
boost::multi_index::indexed_by<
boost::multi_index::hashed_unique<
boost::multi_index::tag<KeyValueTag>,
boost::multi_index::composite_key<
KeyValue,
boost::multi_index::global_fun<KeyValue, unsigned int, &getFoo>,
boost::multi_index::global_fun<KeyValue, unsigned int, &getBar>,
boost::multi_index::global_fun<KeyValue, unsigned int, &getBaz>
>
>,
boost::multi_index::hashed_non_unique<
boost::multi_index::tag<BazTag>,
boost::multi_index::global_fun<KeyValue, unsigned int, &getBaz>
>,
boost::multi_index::hashed_non_unique<
boost::multi_index::tag<BarTag>,
boost::multi_index::global_fun<KeyValue, unsigned int, &getBar>
>
>
>;
Aside: If you have C++17 and boost 1.69 or later, you can use a much terser syntax for keys:
boost::multi_index::key<&getFoo, &getBar, &getBaz>
boost::multi_index::key<&getBar>
|
74,234,003
| 74,234,112
|
Can't assign value to vector in c++
|
I am trying to assign a value to a vector but I keep getting different errors. I am using clang++ version 14.0.0 to build the file and I am getting the errors in the VS Code debugger.
Here are the different errors:
When i run this code
#include <iostream>
#include <vector>
using namespace std;
int main() {
vector<int> v;
v = {0, 1};
return 0;
}
I get the error "expected expression".
When i run this code
#include <iostream>
#include <vector>
using namespace std;
int main() {
vector<int> v;
v = vector<int>{0, 1};
return 0;
}
I get error "expected '(' for function-style cast or type construction"
Neither initializing nor assigning the vector later seems to work.
If my problem is unclear, please let me know how I can improve :)
|
clang++ on Mac defaults to using C++98 (-std=c++98) and in C++98 both errors are to be expected.
Just add
-std=c++11
(or later, like -std=c++14, -std=c++17 or -std=c++20) when compiling and both your snippets will compile fine.
|
74,234,635
| 74,235,105
|
C++ - Sort map based on values, if values same sort based on key
|
I came across a problem where I needed to store two values, an id and its influence, and the id should be randomly accessible. The collection should also be sorted by influence and, if two influences are the same, sorted by id. With these things in mind I used a map, but is there a way to actually do it?
I tried the comparator and map below, but it gives an error
struct cmp
{
bool comparator()(const pair<int,int>a,const pair<int,int>b)
{
if(a.second==b.second) return a.first<b.first;
else return a.second<b.second;
}
};
unordered_map<int,int,cmp>m;
|
From what I understand, you want a collection sorted by one value but quickly indexable by another. These two points are in contradiction. Sorting a collection by a key value makes it quicker to index by that key value. There is no easy way to make a collection quickly indexable in two different ways at the same time. Note that I say "quickly indexable" instead of talking about random access. That's yet a different concept.
In short, you need two collections. You can have a main one that stores influence-ID pairs and is sorted by influences, and a secondary one that stores IDs and maps them to the main collection. There are many options for that; here is one:
std::set<std::pair<int,int>> influences;
std::unordered_map<int, decltype(influences)::iterator> ids;
When inserting an influence-ID pair, you can insert into influence first and then take the iterator to that new element and insert it with the ID.
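A minimal sketch of that insertion step, reusing the influences/ids declarations above (the helper name is just for illustration):
void add_entry(std::set<std::pair<int, int>>& influences,
               std::unordered_map<int, std::set<std::pair<int, int>>::iterator>& ids,
               int influence, int id)
{
    auto [it, inserted] = influences.emplace(influence, id);
    if (inserted)
        ids[id] = it;   // std::set iterators stay valid while other elements are inserted or erased
}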
Another solution is to keep a main collection of influence-ID pairs but this time sorted by IDs (and binary-search it when you need to get the influence from an ID), and maintain an array of permutation indices that tells you at what index an element of the main collection would be if it were sorted by influence. Something like this:
std::vector<std::pair<int,int>> influences;
// insert all elements sorted by ID
std::vector<decltype(influences)::size_type> sorted_indices;
// now sort by influences but modifying `sorted_indices` instead.
Relevant question
If the IDs are all from 0 to N, you can even just have influences indexed by ID:
std::vector<int> influences; // influences[id] is the influence value corresponding to id
This gives you random access.
The "correct" choice depends on the many other possible constraints you may have.
|
74,235,475
| 74,242,216
|
Why default assignment operator is not called from assignment-from-base-class operator?
|
Implementing a derived class from an abstract base class with an assignment operator, I use a dynamic_cast in the assignment-from-base operator so that it calls the derived-to-derived assignment operator. This works.
#include <iostream>
using namespace std;
class base
{
public:
virtual base& operator = (const base& ) = 0;
};
class derived: public base
{
public:
derived (int d): data(d) {};
derived& operator = (const base& b)
{
if (&b == this) return *this;
const derived &d = dynamic_cast<const derived&> (b);
*this = d;
return *this;
}
derived& operator = (const derived& d)
{
if (&d == this) return *this;
data = d.data;
return *this;
}
private:
int data;
};
However when I do not implement derived& operator = (const derived& d) explicitly or use
derived& operator = (const derived& d) = default;
compilation fails, complaining 'undefined reference to base::operator=(base const&)' (i.e. trying to call the abstract method). Why? Is there a way not to implement the default assignment explicitly? It seems redundant and may result in future errors, e.g. in case of adding a field to the base/derived class without correspondingly modifying the assignment operator.
|
If you want to force derived classes D to implement the function D::operator=(const base&) and you also want them to be able to have their own defaulted copy-assignment operators, D::operator=(const D&) = default;, then:
make base::operator=(const base&) pure (as you have already done), and
provide an out-of-line definition of base::operator=(const base&). This looks like: base& base::operator= (const base&) = default; The definition of base's copy-assignment operator is required since it will be called by the defaulted copy-assignment operator of each derived class.
It's surprising, but a function that is declared pure can still be provided with a definition.
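Putting both points together, a minimal sketch with the base/derived names from the question (self-assignment checks omitted for brevity):
class base {
public:
    virtual base& operator=(const base&) = 0;   // pure, but defined out of line below
    virtual ~base() = default;
};

base& base::operator=(const base&) = default;   // out-of-line definition

class derived : public base {
public:
    derived(int d) : data(d) {}
    derived& operator=(const base& b) override {
        return *this = dynamic_cast<const derived&>(b);
    }
    derived& operator=(const derived&) = default; // links fine now
private:
    int data;
};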
|
74,235,569
| 74,236,310
|
How to use actually use abstract classes/interface in Windows API?
|
I'm aware that there's no actual built-in concept of interfaces in C++, so in order to implement one, you must use abstract classes which contain only pure virtual functions.
Now, in Microsoft Windows' API list, some of the classes there, like IPropertyStorage and IStorage, are interfaces (denoted by the I at the start).
Because they are interfaces/abstract classes, they need to be sub-classed or inherited for me to actually use them. What confuses me is that each class has member methods which do certain things, so does that mean I need to override those methods?
Some of the methods:
Thinking that I need to sub-class the Interfaces, I tried the following code below but it seems that I'm wrong about the sub-classing:
#include <iostream>
#include <propidl.h>
#include <objidl.h>
class PropertyStorage : IPropertyStorage {};
int main(int argc, char* argv[]){
PropertyStorage ips(); <- function returning abstract class "PropertyStorage" is not allowed
}
Having said those things, I'd like to reiterate
My question is: How do you actually use the interfaces from Microsoft Windows' API?
|
How do you actually use the interfaces from Microsoft Windows' API?
That's easy: You acquire a pointer to an interface, and start using it. And when done, you Release() it. That's COM in a nutshell.
On to the harder question then: How do you actually get hold of a COM interface pointer? Essentially, there are two ways to do so:
Call a factory function that supplies a COM object through an interface pointer. The "standard" way is by calling CoCreateInstance. The less "standard" (albeit increasingly common) way is to call a dedicated factory function. To get an IPropertyStorage interface you can call StgCreatePropStg (or similar), for example.
This addresses the use case where you are consuming a COM interface that the system (or a library) provides for client use.
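By way of illustration, a minimal sketch of that pattern using CreateStreamOnHGlobal as the factory (it hands back an IStream pointer; any other factory or a CoCreateInstance call follows the same acquire/use/Release shape; link against ole32):
#include <windows.h>
#include <objidl.h>

int main() {
    if (FAILED(CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED)))
        return 1;

    IStream* stream = nullptr;   // interface pointer, not a subclass you wrote
    if (SUCCEEDED(CreateStreamOnHGlobal(nullptr, TRUE, &stream))) {
        const char text[] = "hello";
        ULONG written = 0;
        stream->Write(text, sizeof(text), &written);  // use the interface
        stream->Release();                            // done: release it
    }

    CoUninitialize();
    return 0;
}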
Implement the interface, and use whatever means you see fit to instantiate this concrete implementation. When using a COM-capable C++ compiler (such as MSVC) this amounts to providing an implementation for a class that derives from the interface.
This is useful for cases where you need to author an interface, generally to be consumed by someone else. Examples include IStream or IEnumString. This is not the case for the interfaces asked for in the question.
|
74,236,010
| 74,236,143
|
My code gives different results on different compilers
|
My code gives different results on different compilers. The following code gives 499999998352516354 when I enter 1,1000000000 as my input in VS Code, which is the desired result, while it gives 499999998352516352 on Codeforces.
#include <bits/stdc++.h>
using namespace std;
int main()
{
cout<<fixed<<setprecision(90);
long long x,y;
long long sum=0;
long long z=1;
cin>>x;
for (int i = 0; i < x; i++)
{
cin>>y;
sum=y*(y+1)/2;
z= log2(y);
sum-=2*(pow(2,1+z)-1);
cout<<sum<<"\n";
sum=0;
}
}
|
Use std::llround() function around pow() and your code will work.
It is because pow() returns a floating-point value which can be truncated to 1 less than the intended integer, and llround() rounds it correctly to a whole integer.
Below is fixed code, I also adjusted code formatting and changed to correct necessary C++ headers.
Try it online!
#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;
int main() {
cout << fixed << setprecision(90);
long long x, y;
long long sum = 0;
long long z = 1;
cin >> x;
for (int i = 0; i < x; i++) {
cin >> y;
sum = y * (y + 1) / 2;
z = log2(y);
sum -= 2 * (llround(pow(2, 1 + z)) - 1);
cout << sum << "\n";
sum = 0;
}
}
Input:
1 1000000000
Output:
499999998352516354
As suggested in the comments, you may also use bit shifting, 1LL << x, instead of pow(2, x) if x is a non-negative integer. And instead of log2(y),
if y is an integer, one can use std::bit_width(y) - 1 (read about std::bit_width)
Try it online!
#include <iostream>
#include <iomanip>
#include <cmath>
#include <bit>
using namespace std;
int main() {
cout << fixed << setprecision(90);
long long x, y;
long long sum = 0;
long long z = 1;
cin >> x;
for (int i = 0; i < x; i++) {
cin >> y;
sum = y * (y + 1) / 2;
z = std::bit_width<unsigned long long>(y) - 1;
sum -= 2 * ((1LL << (1 + z)) - 1);
cout << sum << "\n";
sum = 0;
}
}
|
74,236,018
| 74,236,933
|
Is possible to store only 1 color per triangle in OpenGL?
|
When I need to set color of some triangles, I need to define every vertex as follows:
{
float x;
float y;
float z;
float r;
float g;
float b;
float alpha;
}
But in this case, every vertex will have one color.
A triangle has 3 vertices, so it needs 3 colors, and most of the time every pixel of a single triangle has the same color.
But I would need to store 3 colors for it, which is a waste of RAM, VRAM and GPU bandwidth.
How can I store only one color for each triangle?
|
Yes, this is possible by storing the colors separately from the positions. There are multiple ways of mapping the vertices to colors:
Using an SSBO. For the vertex, store only the x, y, z-coordinates. Store all colors in a Shader Storage Buffer Object and use gl_VertexID as an index into the SSBO: color = ssbo_colors[gl_VertexID / 3];.
The downside is that colors cannot be reused, e.g., if you want two triangles to be red, then the color red has to be stored twice.
Using an SSBO with an index per vertex. Use the same method as in 1, but also store a color index for each vertex. Then, reusing colors is possible: color = ssbo_colors[color_idx];
Using a texture. It is also possible to store the colors in a 1D texture, store a texture index with each vertex, and use texelFetch to retrieve the corresponding color. Perhaps this isn't exactly advisable, as textures come with limitations (e.g. size, limited number of textures) and you won't be using typical texture lookup features, such as interpolation and mirroring.
Methods 1 and 2 require OpenGL 4.3. Method 3 requires OpenGL 3.3.
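For method 1, a minimal C++-side sketch of the buffer setup (hedged: assumes an OpenGL 4.3 context with a loader already initialized, a std::vector<glm::vec4> named triangleColors holding one color per triangle, and binding point 0 matching a layout(std430, binding = 0) declaration in the vertex shader):
GLuint colorSsbo = 0;
glGenBuffers(1, &colorSsbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, colorSsbo);
glBufferData(GL_SHADER_STORAGE_BUFFER,
             triangleColors.size() * sizeof(glm::vec4),  // one color per triangle
             triangleColors.data(), GL_STATIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, colorSsbo); // binding point 0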
Since you are not using interpolation over the faces of the triangles, you may want to determine the color in the vertex shader and pass it to the fragment shader using the flat interpolation qualifier, and use a provoking vertex.
If you further want to reduce the amount of memory used, look into storing colors as normalized integers.
|
74,236,048
| 74,236,162
|
Euler method in c++; Values getting too big too fast
|
I am trying to solve the equation of motion for a particle with mass m attached to a spring with a spring constant k. Both are set to 1, however.
The algorithm looks like this:
My (attempted) solution, written in c++, looks like this:
#include <iostream>
#include <iomanip>
#include <math.h>
#include <stdlib.h>
#include <fstream>
// Initialise file to write series of values in
std::ofstream output("Eulermethod.txt");
// Define Euler algorithm
void euler(double x_0, double v_0, double delta, double t_max) {
double x_prev = x_0;
double v_prev = v_0;
double x_new, v_new;
for (double t = 0; t < t_max; t = t + delta) {
x_new = x_prev + t * v_prev;
v_new = v_prev - t * x_prev;
// Writes time, position and velocity into a csv file
output << std::fixed << std::setprecision(3) << t << "," << x_prev << "," << v_prev << std::endl;
x_prev = x_new;
v_prev = v_new;
// Breaks loop if values get to big
if ((x_new != x_new) || (v_new != v_new) || (std::isinf(x_new) == true) || (std::isinf(v_new) == true)) {
break;
}
}
}
int main() {
// Initialize with user input
double x_0, v_0, t_max, delta;
std::cout << "Initial position x0?: ";
std::cin >> x_0;
std::cout << "Intial velocity v0?: ";
std::cin >> v_0;
std::cout << "Up to what time t_max?: ";
std::cin >> t_max;
std::cout << "Step size delta?: ";
std::cin >> delta;
// Runs the function
euler(x_0, v_0, delta, t_max);
}
I know that the solution will grow indefinitely but for smaller values of t it should resemble the analytical solution while growing slowly.
The values I get blow out of proportion after about 10 iterations and I cannot find out why.
When I plot the position as a function of time I get the plot below, which is obviously wrong.
|
Your implementation of the equations is wrong. You are using t instead of the time step delta. Correct variant:
x_new = x_prev + delta * v_prev;
v_new = v_prev - delta * x_prev;
And a side note if you plan to develop your code further: common approach to implementation of ODE solver is to have a method with signature similar to
Output = solveOde(System, y0, t);
Where System is method that describes the ODE dy/dx = f(x,t), e.g.
std::vector<double> yourSystem(std::vector<double> y, double /*t unused*/)
{
return {y[1], -y[0]};
}
y0 are initial conditions, and t is a time vector (delta is calculated internally). Take a look at boost odeint or more compact and transparent python documentation.
|
74,236,323
| 74,236,390
|
C++ multiple definition of function when moving to another file
|
I'm trying to refactor my C++ project, moving some functions from one cpp file (example.cpp) to another (utils.cpp) in order to organize the project and reuse the same functions in other sources.
example.cpp:
double std_dev(std::vector<double> Mean, std::vector<double> Mean2, int n,int i){
if (n==0){
return 0;
} else{
return sqrt((Mean2.at(n) - pow(Mean.at(n),2))/i);
}
}
float mean(const float* x, uint n){
float sum = 0;
for (int i = 0; i < n; i ++)
sum += x[i];
return sum / n;
}
So I delete the functions from example.cpp by cutting the code and pasting it into utils.cpp, and include utils.cpp in example.cpp:
#include "utils.cpp"
//deleted definition
When I compile the project, the compiler fails with the error: multiple definition of 'mean'... multiple definition of 'std_dev', as if the compilation somehow did not remove the function definitions from the example.cpp file.
I've also tried to:
delete the cmake-build-debug folder and reloading the project
compile the code without the function definitions (in order to have a clean compilation) and add them back later
What am I doing wrong?
|
A function can be declared several times (i.e. just the function prototype), but it may be defined only once (i.e. with a body).
This is why .h files with declarations are used and can be included anywhere, and the implementation remains in a single .cpp.
The same holds for global variables: extern declaration in a header, definition in some source file.
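A minimal sketch of that split for the two functions above (file names taken from the question; the bodies stay exactly as written, only their location changes):
// utils.h -- declarations only; safe to #include from many .cpp files
#pragma once
#include <vector>
double std_dev(std::vector<double> Mean, std::vector<double> Mean2, int n, int i);
float mean(const float* x, unsigned int n);   // 'uint' in the question, spelled out here
utils.cpp keeps the two definitions unchanged and starts with #include "utils.h"; example.cpp then replaces #include "utils.cpp" with #include "utils.h".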
|
74,236,714
| 74,239,114
|
How to get a clicked button id from QButtonGroup in qt 6.4 through signal and slot connection
|
I am new to Qt and want to know how to get the id of the button that is clicked, through signals and slots.
connect(group, SIGNAL(buttonClicked(int)), this, SLOT(buttonWasClicked(int)));
This was the earlier syntax to get the id, but Qt has declared buttonClicked(int) obsolete and no longer allows us to use it. Is there a new replacement for this code?
Please forgive me if this question is silly, but I don't know much about Qt yet.
|
The QButtonGroup::buttonClicked(int) signal is obsolete, but you can still use QButtonGroup::buttonClicked(QAbstractButton *). Perhaps use it in conjunction with a lambda and your existing buttonWasClicked slot...
connect(group, &QButtonGroup::buttonClicked,
[this, group](QAbstractButton *button)
{
buttonWasClicked(group->id(button));
});
Alternatively, use the idClicked signal as suggested by @chehrlic...
connect(group, &QButtonGroup::idClicked, this, &MainWindow::buttonWasClicked);
(Assuming MainWindow is the type pointed to by this.)
|
74,237,153
| 74,237,323
|
Process exited with return value 3221225725 C++
|
I've run into a stack overflow problem. As far as I can tell, the problem is the size of a string array.
If I use a size of 10^4 it works fine, but if the size is increased to 10^5, it throws an error.
using namespace std;
string getRow(int dig) {
string row = "";
if (dig) {
row += getRow((dig - 1) / 26);
char c = 'A' + (dig - 1) % 26;
row += c;
}
return row;
}
int main()
{
int n,m;
string inp[100000], col, row, last;
smatch sm;
auto pattern = regex("(R)(\\d+)(C)(\\d+)");
auto pattern2 = regex("([A-Z]+)(\\d+)");
cin >> n;
for(int i=0; i < n; i++)
cin >> inp[i];
for(int i=0; i < n; i++) {
if(regex_match(inp[i], sm, pattern)) {
}else if(regex_match(inp[i], sm, pattern2)) {
}
}
return 0;
}
|
3221225725, or C00000FD in hex, is a code Windows uses when a stack overflow occurs. string inp[100000] creates 100000 strings on the stack, which is too large.
Here's how to program this correctly
vector<string> inp;
int n;
cin >> n;
inp.reserve(n); // reserve space for the strings
for(int i=0; i < n; i++)
{
string s;
cin >> s;
inp.push_back(s); // add one string
}
No limit on the number of strings. No stack overflow.
Declaring a zero-sized vector, reserving the correct size when it is known, and then adding items with push_back is the canonical way to handle a variable number of input items.
|
74,237,380
| 74,237,458
|
Are there cases in C++ where the auto keyword can't be replaced by an explicit type?
|
I came across the following code:
auto x = new int[10][10];
Which compiles and runs correctly but I can't figure out what would be the type for defining x separately from the assignment.
When debugging the type shown is int(*)[10] for x but int (*) x[10]; (or any other combination I tried) is illegal.
So are there cases where auto can't be replaced by an explicit type...? (and is this such a case?)
|
The type of x is int (*)[10]. There are different ways of figuring this out. The simplest is to just try assigning 5 to x and noticing what the error says:
error: invalid conversion from 'int' to 'int (*)[10]' [-fpermissive]
13 | x = 4;
| ^
| |
| int
Or just use static_assert:
static_assert(std::is_same<decltype(x), int(*)[10]>::value);
This means that if you want to explicitly create x(without using auto) then you can do it as:
int (*x)[10] = new int[10][10];
Are there cases in C++ where the auto keyword can't be replaced by an explicit type?
Now coming to the title, one example where auto cannot be directly replaced by explicitly writing a type is when dealing with unscoped unnamed enum as shown below: Demo
enum
{
a,b,c
}obj;
int main()
{
//--vvvv----------->must use auto or decltype(obj)
auto obj2 = obj;
}
Similarly, when dealing with unnamed class. Demo.
|
74,237,474
| 74,237,877
|
Is it needed to change font weight when using High DPI monitors for wxWebview in wxWidgets library?
|
I am working with wxWebview widget with both IE and Edge backend in Windows 10.
My understanding so far is that IE does not respect high DPI monitors and does not scale fonts accordingly. So in the IE backend, I must handle the DPI change event and update my font size with FromDPI().
I set the fonts in a style tag like below:
<style> body {font: normal 400 12px Segoe UI, system-ui;} </style>
But Edge does much better work and scales the font. My goal is to use Edge backend in production. I want to know whether it is even needed to handle DPI change event with this backend or it is handled internally by webview2 control? If yes, should I also change font-weight in high DPI monitors besides font-size? If yes, how? ( I think FromDPI does not work here )
wxversion: 3.2.1
OS: Windows 10
|
@Reza,
I suggest dropping IE backend.
IE support will retire in a couple of months and then it will be Edge only.
So as long as Edge behaves correctly - it will be OK.
|
74,237,521
| 74,237,751
|
C++ Convert Number to Text With Text
|
I want the numbers in the text entered by the user to be converted into words and printed on the screen. Example:
cin>> My School Number is 5674
and I want the output to be "my school number is five six seven four". I have only managed to convert a number to text, but I can't put text and numbers together. Please help me.
#include <iostream>
using namespace std;
void NumbertoCharacter(int n)
{
int rev = 0, r = 0;
while (n > 0) {
r = n % 10;
rev = rev * 10 + r;
n = n / 10;
}
while (rev > 0) {
r = rev % 10;
switch (r) {
case 1:
cout << "one ";
break;
case 2:
cout << "two ";
break;
case 3:
cout << "three ";
break;
case 4:
cout << "four ";
break;
case 5:
cout << "five ";
break;
case 6:
cout << "six ";
break;
case 7:
cout << "seven ";
break;
case 8:
cout << "eight ";
break;
case 9:
cout << "nine ";
break;
case 0:
cout << "zero ";
break;
default:
cout << "invalid ";
break;
}
rev = rev / 10;
}
}
int main()
{
int n;
cin >> n;
NumbertoCharacter(n);
return 0;
}
|
Here is a solution.
The key line is the int charToInt = a - '0';. Here is a link to more details on that technique.
If the value returned (charToInt) is between 0 and 9, then the character is a valid integer and can be converted. If not, then just print the original character.
I also changed cin to getline (documentation here) so that your input can accept multiple words.
void NumbertoCharacter(string n)
{
for (auto a : n) {
int charToInt = a - '0';
if (charToInt >= 0 && charToInt <= 9) {
switch (charToInt) {
case 1:
cout << "one ";
break;
case 2:
cout << "two ";
break;
case 3:
cout << "three ";
break;
case 4:
cout << "four ";
break;
case 5:
cout << "five ";
break;
case 6:
cout << "six ";
break;
case 7:
cout << "seven ";
break;
case 8:
cout << "eight ";
break;
case 9:
cout << "nine ";
break;
case 0:
cout << "zero ";
break;
default:
cout << "invalid ";
break;
}
}
else {
cout << a;
}
}
cout << endl;
}
int main(int argc, char** argv) {
string n;
getline(cin, n);
NumbertoCharacter(n);
return 0;
}
|
74,237,535
| 74,237,576
|
Understanding const pointer to const reference
|
Below is a code snippet I don't quite understand.
See the two lines with comments.
I'm happy for any explanation or a reference to a site where I can find one.
I don't quite understand what's going on and what is wrong at out = *command;
#include <variant>
struct Buffer
{
struct StructA
{
int X = 10;
};
struct StructB
{
int X = 20;
};
std::variant<StructA, StructB> Data = StructA{};
template <typename T>
bool GetCommand(T& out)
{
T* command = std::get_if<T>(&Data);
if (command != nullptr)
{
out = *command;
return true;
}
return false;
}
template <typename T>
bool GetCommand(const T& out) const
{
const T* command = std::get_if<T>(&Data);
if (command != nullptr)
{
out = *command; // don't understand
return true;
}
return false;
}
};
void dummy(const Buffer& buffer)
{
Buffer::StructB tmpStruct;
auto tmpBuffer = buffer;
tmpBuffer.GetCommand<Buffer::StructB>(tmpStruct);
buffer.GetCommand<Buffer::StructB>(tmpStruct); // don't compile
}
int main()
{
Buffer buffer;
dummy(buffer);
return 0;
}
Hoping for a code explanation.
|
In
bool GetCommand(const T& out) const
{
const T* command = std::get_if<T>(&Data);
if (command != nullptr)
{
out = *command; // don't understand
out is a reference to a const object of type T. out refers to an object that can't be modified. When you try to call the assignment operator = on it, you try to modify out. So the compiler disallows it.
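A minimal sketch of one possible fix (an assumption about what was intended, not the only option): keep the member function const so it also compiles for the const Buffer& in dummy, but take the output parameter by non-const reference so it can be assigned to.
template <typename T>
bool GetCommand(T& out) const
{
    const T* command = std::get_if<T>(&Data);
    if (command != nullptr)
    {
        out = *command; // fine: only *this is const, out itself is modifiable
        return true;
    }
    return false;
}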
|
74,237,761
| 74,237,982
|
RAII function invoke
|
Is there a class in the standard library that will invoke a provided function in its destructor?
Something like this
class Foo
{
public:
template<typename T>
Foo(T callback)
{
_callback = callback;
}
~Foo()
{
_callback();
}
private:
std::function<void()> _callback;
};
auto rai = Foo([](){ cout << "dtor";});
|
There is an experimental scope_exit.
example: https://godbolt.org/z/4r54GYo33
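A small usage sketch (the header and namespace below are how libstdc++ ships this TS facility; availability varies by compiler and standard library):
#include <experimental/scope>
#include <iostream>
int main()
{
    std::experimental::scope_exit raii([] { std::cout << "dtor\n"; });
    std::cout << "work\n";
}   // prints "work", then "dtor" when raii is destroyed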
|
74,237,841
| 74,238,109
|
What would be the fastest way to find an item by a multivalued key in C++?
|
I will need to parse thousands upon thousands of simple entries in C++. I've only ever programmed in C, so I might be missing some higher-level facilities that make this task easier.
An entry consists of 4 separate values: a sender, a receiver, a date, and a type of mail. Three of these are string values, the last one is an integer. My goal (after all entries are processed) is to print out all different entries that were received on the input and how many times each of these entries were received.
That means that if there was the same sender, receiver, date, and type of mail, multiple times on the input, the output would say that this entry was received e.g 5 times.
What would be the best way to do this? I tried C++ map but was unable to make it work.
|
I suggest defining a class with an operator< that makes it possible to store instances of the class in a std::map. The std::map can be used to map from objects comparing equal to a count. Objects are considered equal if neither lhs < rhs nor rhs < lhs is true so only the operator< overload is necessary.
You could also add operator>> and operator<< overloads to make it possible to read and write objects from streams.
It could look like this:
#include <iostream>
#include <map>
#include <string>
#include <tuple>
struct foo {
std::string sender;
std::string receiver;
std::string date;
int type_of_mail;
// compare two foo instances:
bool operator<(const foo& rhs) const {
return std::tie(sender, receiver, date, type_of_mail) <
std::tie(rhs.sender, rhs.receiver, rhs.date, rhs.type_of_mail);
}
// read a foo from an istream:
friend std::istream& operator>>(std::istream& is, foo& f) {
return is >> f.sender >> f.receiver >> f.date >> f.type_of_mail;
}
// write a foo to an ostream:
friend std::ostream& operator<<(std::ostream& os, const foo& f) {
return os << f.sender << ' ' << f.receiver << ' ' << f.date << ' '
<< f.type_of_mail;
}
};
int main() {
std::map<foo, unsigned> counts;
foo tmp;
// read foos from any istream
while(std::cin >> tmp) {
++counts[tmp]; // count
}
// print the count for each
for(const auto&[f, count] : counts) {
std::cout << count << ' ' << f << '\n';
}
}
Demo
|
74,238,462
| 74,238,533
|
How to call a function returning a function pointer?
|
I am trying to call a function pointer using an explicit dereference. But the compiler throws an error:
no operator "*" matches these operands.
Here's a simplified version of my code:
#include <functional>
#include <iostream>
int add(int a, int b)
{
return a + b;
}
std::function<int(int, int)> passFunction()
{
return &add;
}
int main()
{
int a{ 1 };
int b{ 2 };
std::cout << (*passFunction())(a, b);
return 0;
}
The thing is, it works fine when I just write:
std::cout << passFunction()(a, b); // without the asterisk.
which blows my mind.
I thought that I had messed up the parentheses in the function call. I tried a different order and precedence, I called it with an ampersand, and still the compiler doesn't even flinch.
|
I'm trying to call a pointer function using a explicit dereference. But compiler throws an error: 'no operator "*" matches these operands'.
Type matters!
The return type of the passFunction is not a pointer, rather std::function. It is not something of dereferencable. Hence, you get the compiler error.
The thing is, it works fine when I just write: std::cout << passFunction()(a, b); without asterix [...]
The std::function meant for invoking/ calling. That is why passFunction()(a, b) works. It calls the operator() of std::function (i.e. equivalent to passFunction().operator()(a, b)).
That being said, if you simply return the function by auto (since C++14 - let the compiler deduce the type), you will get the actual function pointer, which you can call
either by dereferencing (unnecessary) or directly.
using fun_t = int(*)(int, int); // function pointer type
auto passFunction()
//^^^ use auto --> type == int(*)(int, int)
// or use fun_t
{
return &add;
}
now you can (unnecessary)
(*passFunction())(a, b);
or
passFunction()(a, b);
Now the obvious question: "how are both syntaxes possible?"
Read here in detail in the following posts:
How does dereferencing of a function pointer happen?
Calling a function through a function pointer - dereference the pointer or not? What's the difference?
Why can we dereference a function pointer?
|
74,238,517
| 74,238,659
|
Computing data on the fly vs. pre-computed table
|
A common technique to save some time is to use a table of pre-computed values, instead of computing the value every time on the fly. For example, for an integer approximation of the logarithm.
Could it happen that fetching a value for the first time will actually be slower than computing it because the table is far away in memory (the DATA segment) and the access causes a page load? Is there a way to force the compiler to put the data closer to the code?
I presume the table has to be static, does it then matter if it is declared locally inside the given function or globally?
|
Could it happen that fetching a value for the first time will actually be slower than computing it because the table is far away in memory (the DATA segment) and the access causes a page load?
Yes, it can happen. Whether a page load occurs is not the only consideration. This question is hugely dependent on system architecture and the tools used. It can be affected by whether instruction cache and data cache are separate (and on which levels of cache), the cache geometry, other patterns of use in the program, processor speed, cache speed, memory speed, and more.
Sometimes high-performance software will “touch” data it does not need now but anticipates using in the future to cause it to be loaded into memory and/or cache so that it is available more quickly in the future when it is needed. (Depending on system architecture and circumstances, loading data from memory may not slow down any subsequent instructions if no such instruction uses the loaded data, or there may be instructions that explicitly request data be loaded into cache but not a CPU register.)
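As an illustration of that "touching" idea, here is a hedged sketch; log_table is a made-up placeholder, and __builtin_prefetch is a GCC/Clang extension that is only a hint to the hardware:
#include <cstddef>
static const int log_table[4096] = { /* precomputed entries */ };
void warm_log_table()
{
    // Request one cache line at a time before the first real lookup needs it.
    for (std::size_t i = 0; i < 4096; i += 64 / sizeof(int))
        __builtin_prefetch(&log_table[i]);
}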
Is there a way to force the compiler to put the data closer to the code?
When it is necessary to closely control where code and data are located in memory, it is usually done with features of the linker and/or program loader, and possibly with assembly language, not with the compiler. The involvement of the compiler may be to compile small functions into separate object modules or with features allowing functions to be separated from other functions, so that the linker can rearrange the functions and data according to other instructions it is given.
I presume the table has to be static,…
No, “static” in the sense of object lifetime is largely irrelevant to system performance except in systems where some data may be put into ROM or different types of memory with different performance characteristics. As for how the keyword static affects identifier scope, that is irrelevant to memory performance.
… does it then matter if it is declared locally inside the given function or globally?
Not unless the compiler has features for controlling the location of one of these and not the other.
|
74,238,912
| 74,242,494
|
How to copy entire vector into deque using inbuilt function? (in C++)
|
I want to know whether there is any built-in function to do this task:
vector<int> v;
deque<int> d;
for(auto it:v){
d.push_back(it);
}
This is the only way I know to copy the values of a vector into a deque, and I want to know whether there is any built-in function to perform this task.
|
As Pepijn Kramer said in the comments 1 and 2, you can use the overload (2) for the assign member function that takes a range
d.assign(v.begin(),v.end());
or use the iterator-range constructor, overload (5)
std::deque<int> d{v.begin(),v.end()};
Or in C++23, you can do
auto d = std::ranges::to<std::deque>(v);
|
74,240,374
| 74,240,488
|
MSVC insists on using inaccessible member from private-inherited base class, although it was re-defined as public
|
Consider this code (godbolt example):
#include <cstddef>
template< std::size_t... sizes >
class Impl
{
public:
static constexpr std::size_t
getDimension()
{
return sizeof...( sizes );
}
};
template< std::size_t... sizes >
class Wrapper : private Impl< sizes... >
{
using BaseType = Impl< sizes... >;
public:
//using BaseType::getDimension;
static constexpr std::size_t
getDimension()
{
return BaseType::getDimension();
}
};
template< typename A, typename B >
class Combine : public B
{
public:
static_assert( A::getDimension() > 0 );
static_assert( B::getDimension() > 0 );
static_assert( A::getDimension() == B::getDimension(),
"dimensions of A and b do not match" );
};
int main()
{
Combine< Impl< 0 >, Wrapper< 0 > > a;
}
Trying to compile with MSVC (v19 with /std:c++17 /permissive-) leads to this error:
example.cpp
<source>(32): error C2247: 'Impl<0>::getDimension' not accessible because 'Wrapper<0>' uses 'private' to inherit from 'Impl<0>'
<source>(8): note: see declaration of 'Impl<0>::getDimension'
<source>(40): note: see declaration of 'Wrapper<0>'
<source>(40): note: see declaration of 'Impl<0>'
<source>(32): error C2248: 'Impl<0>::getDimension': cannot access private member declared in class 'Wrapper<0>'
<source>(8): note: see declaration of 'Impl<0>::getDimension'
<source>(40): note: see declaration of 'Wrapper<0>'
<source>(40): note: see reference to class template instantiation 'Combine<Impl<0>,Wrapper<0>>' being compiled
<source>(34): error C2247: 'Impl<0>::getDimension' not accessible because 'Wrapper<0>' uses 'private' to inherit from 'Impl<0>'
<source>(8): note: see declaration of 'Impl<0>::getDimension'
<source>(40): note: see declaration of 'Wrapper<0>'
<source>(40): note: see declaration of 'Impl<0>'
<source>(34): error C2248: 'Impl<0>::getDimension': cannot access private member declared in class 'Wrapper<0>'
<source>(8): note: see declaration of 'Impl<0>::getDimension'
<source>(40): note: see declaration of 'Wrapper<0>'
Compiler returned: 2
However, the code compiles fine with state-of-the-art compilers like GCC and Clang, so I assume the code is correct. Is there a workaround for MSVC? Note that the private inheritance has a purpose in the real world.
|
An ugly workaround exists. First, you might define an external template function for delegating to the proper class:
template<typename T>
constexpr std::size_t getDimension()
{
return T::getDimension();
}
This is so that MSVC's name resolution won't try to search in the private base classes. Then, you might write:
template< typename A, typename B >
class Combine : public B
{
public:
static_assert( ::getDimension<A>() == ::getDimension<B>(),
"dimensions of A and b do not match" );
};
The :: is not mandatory, it's only for cases where getDimension() would also take template arguments as a member function.
This hack uses the fact that it's a static member function; if you have a non-static member function, you need to pass this as well.
Example: https://godbolt.org/z/4fax9dGvd
An alternative solution is to wrap the call to an inner struct:
template< typename A, typename B >
class Combine : public B
{
public:
static struct {
static_assert( A::getDimension() == B::getDimension(),
"dimensions of A and b do not match" );
} verify;
};
This does not require a global template function to delegate, but neither is it run directly inside your class (which might or might not be a problem).
|
74,240,401
| 74,240,606
|
Segmentation fault (core dumped) C++ error Process returned 139 (0x8B)
|
can someone please explain what's wrong with this code? I'm getting this error in console when I try to run the program.
#include <iostream>
#include <vector>
#include <unordered_map>
using namespace std;
int main()
{
vector<int> nums = {2,1,9,4,4,56,90,3};
int target = 8;
unordered_map<int,int> m;
for(int i=0;i<nums.size();i++){
m[nums[i]] = i;
}
int req_num;
for (int i=0; i<nums.size(); i++){
req_num = target - nums[i];
auto search = m.find(req_num);
int first = search->first;
int second = search->second;
if(first == req_num && second != i){
cout << second << endl;
}
}
return 0;
}
I'm not sure what I'm doing wrong. If someone can point out my error and explain what I did wrong, that'd be of great help!!
I tried running the program multiple times thinking it might be a build error. I'm getting the same result.
It was working fine until I changed map from ordered to unordered.
|
I bet you are doing the two-sum question. The reason for the segmentation fault is that find() may not always find your req_num in the map, and you then dereference the end iterator. To fix this, I modified your code and it works properly.
#include <iostream>
#include <vector>
#include <unordered_map>
using namespace std;
int main()
{
vector<int> nums = {2,1,9,4,4,56,90,3};
int target = 8;
unordered_map<int,int> m;
for(int i=0;i<nums.size();i++){
m[nums[i]] = i;
}
int req_num;
for (int i=0; i<nums.size(); i++){
req_num = target - nums[i];
auto search = m.find(req_num);
if(search == m.end()) continue;
int first = search->first;
int second = search->second;
if(first == req_num && second != i){
cout << second << endl;
}
}
return 0;
}
I also modified your code to make more sense.
#include <iostream>
#include <vector>
#include <unordered_map>
using namespace std;
int main()
{
vector<int> nums = {2,1,9,4,4,56,90,3};
int target = 8;
unordered_map<int,int> m;
for(int i=0;i<nums.size();i++){
m[nums[i]] = i;
}
int req_num;
for (int i=0; i<nums.size(); i++){
req_num = target - nums[i];
if(m.find(req_num) != m.end()){
if(m[req_num]!=i){
cout << m[req_num] << endl;
}
};
}
return 0;
}
|
74,240,461
| 74,248,581
|
QT - Draw a point at a user specified location in a 3D surface
|
I was going through the Surface example here
When the user clicks anywhere it draws a point
image surface
What I'd like to know is how to do this programmatically: if a user gives 3 coordinates x, y, and z, how do I go about plotting such a point?
|
you can add a custom item like so:
QImage color = QImage(2, 2, QImage::Format_RGB32);
color.fill(Qt::cyan);
QVector3D positionOne = QVector3D(2.0f, 2.0f, 0.0f);
QCustom3DItem* item = new QCustom3DItem(":/items/monkey.obj", positionOne,
QVector3D(0.0f, 0.0f, 0.0f),
QQuaternion::fromAxisAndAngle(0.0f, 0.0f, 0.0f, 45.0f),
color);
item->setScaling(QVector3D(0.1f, 0.1f, 0.1f));
m_graph->addCustomItem(item);
Note that the .obj file must be a mesh file; you can generate one with Blender, for example. Just don't forget to triangulate the mesh before you export the .obj file.
|
74,240,755
| 74,240,991
|
Loading process and loading screens
|
Well, the purpose of the loading screens is to display to the user that data is being loaded (downloaded) in the background.
But where can you make loading screens? Can you do it in every program at startup, for example, and how?
|
Once you have learned to display windows on the screen, you simply... display a window on the screen, with a progress bar, and every time your program loads something, you make the progress bar bigger.
This only makes sense if your program loads lots of things when it starts (such as large pictures or game levels). You can't make the OS display a loading screen when it loads the program. And you probably don't need to, because it's already fast enough.
|
74,241,025
| 74,241,116
|
My sorting algorithm doesn't function due to unknown reason
|
I'm trying to create a sorting algorithm, it contains a nested loop which compares each element of the array to all other elements in the array, and if an element is greater in value than any of its succeeding elements, they switch places with each other. But for some reason my program won't output anything and exits with code 0, i.e. success.
Following is my code:
#include <iostream>
using namespace std;
void sortAlgo(int *a, int n){
int tmp;
for (int i=0; i<=n-1; i++){
for(int j=i+1; j<=n; j++){ //O(n^2)
if(a[i]>a[j]){
//LHS variable assumes RHS quantity
tmp=a[i]; //a[i] value stored in temp variable
a[i]=a[j]; //shifts a[j] value to a[i]
a[j]=tmp; //a[j] takes value of a[i]
}
}
}
for(int x=0; x<=n; x++){
cout<<a[x]<<" ";
}
}
int main(){
int arr[10]={1,2,3,5,23,12,4};
sortAlgo(arr, 7);
}
I am using VS Code.
|
It is not the bubble sort algorithm. You are trying to implement the selection sort algorithm with redundant swaps.
These for loops
for(int j=i+1; j<=n; j++){ //O(n^2)
and
for(int x; x<=n; x++){
have invalid conditions that in general can result in undefined behavior when the passed array has exactly n elements, because the expression a[n] accesses memory beyond the array.
Moreover, in the second for loop the variable x was not initialized, which again invokes undefined behavior.
Also note that, instead of swapping two elements "manually",
tmp=a[i]; //a[i] value stored in temp variable
a[i]=a[j]; //shifts a[j] value to a[i]
a[j]=tmp; //a[j] takes value of a[i]
you could use standard C++ function std::swap declared in the header <utility>.
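A minimal corrected sketch following the points above (loop bounds tightened, x initialized, std::swap used); n is the number of elements in the array:
#include <iostream>
#include <utility>
void sortAlgo(int* a, int n) {
    for (int i = 0; i < n - 1; ++i)
        for (int j = i + 1; j < n; ++j)   // j never reaches a[n]
            if (a[i] > a[j])
                std::swap(a[i], a[j]);
    for (int x = 0; x < n; ++x)           // x initialized, stops before a[n]
        std::cout << a[x] << " ";
    std::cout << "\n";
}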
|
74,241,402
| 74,241,542
|
How to default values to array when user input enter key using cin method?
|
I am writing code where I ask the user to assign values to an array, and when the user just presses the Enter key I want to assign a default value to that array element. Any idea how to proceed with this?
I have tried using the cin.get() method but it is not working. Here is my code:
#include<iostream>
#include<math.h>
#include<cmath>
using namespace std;
int main()
{
int n;
cout << "Enter array size: ";
cin >> n;
double y[n];
string input;
for(int i=0; i<n; ++i)
{
cout << "Enter Initial Velocity ( Default Value 0 ) : " ;
y[i] = cin.get();
if (input=="")
{
y[i]=0.0;
}
else
{
y[i]=stod(input);
}
}
}
|
I've tried to change as little as possible to make your code work as you expected. A bunch of things can be improved: array creation (see this), not including useless headers, reducing the scope of local variables; also don't use using namespace std - see this.
#include <math.h> //useless
#include <cmath> //useless
#include <iostream>
//#include <string> some compilers may require adding this
using namespace std; //bad practice
int main() {
int n;
cout << "Enter array size: ";
cin >> n;
double y[n]; //not a great way to create an array, fix this
cin.ignore();
//can be moved inside the loop
string input;
for (int i = 0; i < n; ++i) {
cout << "Enter Initial Velocity ( Default Value 0 ) : ";
getline(cin, input);
if (input == "") {
y[i] = 0.0;
} else {
y[i] = stod(input);
}
}
for (int i = 0; i < n; i++) {
if (i > 0) cout << ' ';
cout << y[i];
}
cout << '\n';
}
|
74,241,779
| 74,243,406
|
Creating std::span from pybind array
|
void createSpanfromNumpy(pybind11::array_t<int>& inputA, std::vector<int>& inputB)
{
auto addressA = inputA.data();
auto addressB = inputB.data();
std::span<int> testSpanA{ inputA.data(), inputA.size()};
std::span<int> testSpanB{ inputB.data(), inputB.size() };
//do stuff with span
}
Trying to grab references to the memory used in a numpy array and pass them around in C++ for convenience. In the example code above it works fine for a std::vector but when trying to create a span from numpy array (inputA), this compiler error is produced:
Error C2440 'initializing': cannot convert from 'initializer list' to 'std::span<int,4294967295>' numpytoSpan
What is it about the pointer coming out of the array_t.data() call which doesn't work?
|
inputA.data() is a const int*.
Use a std::span<const int> or use pybind11::array_t::mutable_data.
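A minimal sketch of both options (the std::size_t cast is only there because array_t::size() returns a signed type; it is not strictly required):
std::span<const int> testSpanA{ inputA.data(), static_cast<std::size_t>(inputA.size()) };
std::span<int>       testSpanM{ inputA.mutable_data(), static_cast<std::size_t>(inputA.size()) };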
|
74,241,829
| 74,242,022
|
Create directory but only when not running from floppy
|
I'm trying to get my application to create a directory if it doesn't exist, but only if the application isn't running from a floppy drive. I'm making the assumption that drives A: and B: are floppies.
The code I'm using is as follows:
char drive[_MAX_DRIVE];
char dir[_MAX_DIR];
char filen[_MAX_FNAME];
char ext[_MAX_EXT];
char SourcePath[_MAX_PATH];
_splitpath( ProgramPath, drive, dir, filen, ext );
strcpy( SourcePath, drive );
strcat( SourcePath, dir );
strcat( SourcePath, "SYSBAK" );
if ( toupper( drive[0] ) != 'A' || toupper( drive[0] != 'B' ) {
if ( !FolderExist( SourcePath ) ) {
try {
_mkdir( SourcePath );
}
catch( ... ) {
// do nothing
}
}
}
I have also tried:
if ( stricmp( drive, "A:" ) != 0 || stricmp( drive, "B:" ) != 0 ) {
and:
if ( !(stricmp( drive, "A:" ) == 0) || !(stricmp( drive, "B:" ) == 0) ) {
and yet when I compile and run from a floppy, the directory still gets created. I've confirmed using printf that drive[0] is indeed equal to A when running from floppy drive A:. ProgramPath is defined earlier in the application as argv[0].
Can anybody tell me what I'm doing wrong? What is the correct code?
|
You have "typos", in particular, your || should be &&:
toupper(drive[0]) != 'A' && toupper(drive[0]) != 'B'
which is equivalent to
!(toupper(drive[0]) == 'A' || toupper(drive[0]) == 'B')
|
74,241,837
| 74,241,953
|
MSVC CRT Debug Heap assert passing C++ STL object across binary boundaries
|
We have a host application and a DLL both built with the same compiler settings etc both using the static CRT /MT. (tested with VS2019 and VS2022)
Eventually we pass some simple structs that contain a couple STL objects, for example
struct MyData
{
std::vector<std::string> entries;
};
...
MyData DLL::getMyData() const
{
// ...
return MyData();
}
We get a CRT heap assertion when we deallocate a MyData on the host application side that was retrieved via the DLL's API.
Debug Assertion Failed!
File: minkernel\crts\ucrt\src\appcrt\heap\debug_heap.cpp
Line: 996
Expression: __acrt_first_block == header
host.exe!free_dbg_nolock(void * const block, const int block_use) Line 996 C++
host.exe!_free_dbg(void * block, int block_use) Line 1030 C++
host.exe!operator delete(void * block) Line 38 C++
host.exe!operator delete(void * block, unsigned __int64 __formal) Line 32 C++
host.exe!std::_Deallocate<16,0>(void * _Ptr, unsigned __int64 _Bytes) Line 255 C++
host.exe!std::_Default_allocator_traits<std::allocator<std::_Container_proxy>>::deallocate(std::allocator<std::_Container_proxy> & _Al, std::_Container_proxy * const _Ptr, const unsigned __int64 _Count) Line 670 C++
host.exe!std::_Deallocate_plain<std::allocator<std::_Container_proxy>>(std::allocator<std::_Container_proxy> & _Al, std::_Container_proxy * const _Ptr) Line 976 C++
host.exe!std::_Delete_plain_internal<std::allocator<std::_Container_proxy>>(std::allocator<std::_Container_proxy> & _Al, std::_Container_proxy * const _Ptr) Line 989 C++
host.exe!std::string::~basic_string<char,std::char_traits<char>,std::allocator<char>>() Line 3007 C++
host.exe!DLL::MyData::~MyData() C++
Oddly enough no errors are reported with Address Sanitizer in MSVC (and also with ASAN using clang/Xcode), and the application seems stable in Release builds, but in MSVC Debug builds always breaks in the dtor.
Notably this is normally at least 1 call site away from the DLL interface, so is there some scenario in which RVO is moving the object away from the DLL without copying it at the binary boundary? Kind of stumped on this one.
|
The simplest way to fix this is usually just to DLL-export the MyData class (from memory, defining the dtor as virtual can also fix this problem). Without that, because the dtor is inline, you are getting two copies of the dtor: one in the original DLL, one in the exe. DLL::getMyData() is creating a new instance using the heap in the DLL. The exe is then trying to free the data members using its own heap. MSVC then spews an error because that heap didn't see the original allocation.
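A sketch of what exporting the class might look like (the macro name is made up; it should expand to dllexport while building the DLL and to dllimport in consumers):
#include <string>
#include <vector>
#ifdef BUILDING_MYDLL
#  define MYDLL_API __declspec(dllexport)
#else
#  define MYDLL_API __declspec(dllimport)
#endif
struct MYDLL_API MyData
{
    std::vector<std::string> entries;
};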
In the case of std::string (or any other container type in the stdlib), I'd strongly recommend making those members private at the very least. You could easily run into the same problem if a programmer directly accesses the member functions (e.g. reserve).
Normally MSVC would give you a C4251 warning for this problem. Sounds like this warning may have been disabled in your build?
|
74,242,934
| 74,243,010
|
How do I distinguish -std=c++17 and -std=gnu++17 at compile time? checking macros?
|
I am using the __int128 extension of g++. The problem with -std=c++17 is that some of the C++ library does not have all the support for that extension (i.e. std::make_unsigned<> fails). When using -std=gnu++17 it works fine.
I've added a header file that allows <limits> to work with __int128 when using -std=c++17, and I'd like to keep it for now, but when using -std=gnu++17 it breaks (because it is already defined). So I was thinking to add a condition like so:
#if !(<something>)
...
#endif
if the compiler already supports the limits with __int128.
My question is: what is that <something> I could check to distinguish between the standard and the GNU c++17 libraries?
|
I did this:
$ diff <(g++-11 -std=c++17 -E -dM -x c++ /dev/null|LC_ALL=C sort) \
<(g++-11 -std=gnu++17 -E -dM -x c++ /dev/null|LC_ALL=C sort)
And the output was:
180a181,182
> #define __GLIBCXX_BITSIZE_INT_N_0 128
> #define __GLIBCXX_TYPE_INT_N_0 __int128
315d316
< #define __STRICT_ANSI__ 1
424a426,427
> #define linux 1
> #define unix 1
That's not definitive, of course, but it's maybe a start.
So you could check for __STRICT_ANSI__ (indicating that there are no GNU extensions), but perhaps the undocumented __GLIBCXX_BITSIZE_INT_N_0 is more direct.
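So the <something> in the question could become a guard like this (the fallback header name is just a placeholder for your own file):
#if defined(__STRICT_ANSI__)
// -std=c++17: the library does not register __int128 as an extended integer
// type, so pull in the hand-written <limits> support.
#include "my_int128_limits.h"
#endif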
|
74,243,826
| 74,244,054
|
GMock EXPECT_CALL returns FAILED while comparing two char arguments inside the method
|
As the title says, I'm using gMock to test my feature. But an issue occurred: EXPECT_CALL always checks the addresses of the two char arrays instead of their values.
Below is my code example:
Base.h
//Create singleton class
class Base {
private:
static Base* _ptrInstance;
public:
static Base* getInstance();
void sendString(const char* text, int value);
};
Base.cpp
#include "Base.h"
Base* Base::_ptrInstance = NULL;
Base* Base::getInstance(){
if ( NULL == _ptrInstance ){
_ptrInstance = new Base();
}
return _ptrInstance ;
}
void Base::sendString(const char* text, int value){
//do something
}
Here is the function that need to be tested:
test.cpp
#include <iostream>
#include "Base.h"
int Test(){
Base* poBase;
char text[] = "hello_world";
poBase->getInstance()->sendString(text, 0);
return 0;
}
my MOCK method:
MOCK_METHOD2(sendString, void (const char* text, int value));
here is my test case:
TEST_F(myTest, sendStringTest){
EXPECT_CALL(*BaseMock, sendString("hello_world", 0));
Test();
}
When I execute my test, the test case above always FAILS:
Expected arg #0: is equal to 0x56e88a0d pointing to "hello_world"
Actual: 0xffcb1601 pointing to "hello_world"
Expected: to be called once
Actual: never called - unsatisfied and active
Given this failure, I think that EXPECT_CALL is comparing the argument addresses instead of their values.
(Here, the text[] address is created in Test.cpp and the "hello_world" address inside EXPECT_CALL.)
Does anyone know how to overcome this failure?
Many thanks.
|
With
EXPECT_CALL(*BaseMock, sendString("hello_world", 0));
you compare pointers.
According to reference/matchers.md, you might use StrEq:
EXPECT_CALL(*BaseMock, sendString(StrEq("hello_world"), 0));
|
74,243,898
| 74,243,979
|
How can I pass base and derived class objects to the same method?
|
Let's say I have a class Fruit and several classes like Apple, Mango, etc. Now I want to create a single function that would accept Fruit and all of its derived classes' objects as an argument. How can I do so?
I have not tried anything yet!
|
You can make the function parameter to be a reference or a pointer to the base class as shown below.
void eat(const Fruit* f)
{
//code here
}
Or
void eat(const Fruit& f)
{
//code here
}
You can remove the low-level const if you want to be able to make changes through f.
|
74,243,962
| 74,244,060
|
Copy data to struct in a serial way in C++
|
I have a packed struct like this one below
struct MyStruct
{
uint8_t a;
uint32_t b;
uint16_t c;
uint8_t d[4];
uint16_t e[2];
uint32_t crc_field;
} __attribute__((packed));
I want to store data into it in a serial way, meaning that I want to use memcpy or a for loop to copy data from an array into the struct above.
Consider the example:
uint8_t Buffer[64]; //We assume it contains some data.
MyStruct foo;
memcpy(&foo, Buffer, sizeof(MyStruct));
Assuming that I always check crc_field, and that the data in Buffer was stored the opposite way earlier (from another MyStruct into Buffer), would there be any issue with this approach?
Would it be better to use a union instead?
union MyUnion
{
uint8_t raw_data[19];
struct MyStruct
{
uint8_t a;
uint32_t b;
uint16_t c;
uint8_t d[4];
uint16_t e[2];
uint32_t crc_field;
} __attribute__((packed));
}
uint8_t Buffer[64]; //We assume it contains some data.
MyUnion foo;
memcpy(foo.raw_data, Buffer, sizeof(MyStruct));
|
The MyStruct class is trivially copyable. That means you can safely copy its byte representation into a buffer, and when you copy that representation back into a MyStruct object, the object will take on the original value. No need for a union here.
However, as the representation of scalar types is not fully defined by the standard, using it to exchange data between different systems, or between applications compiled with different options, could invoke undefined behaviour. That may not be a concern if your compiler documents the type and struct representations (probable, as you are already using the non-standard __attribute__((packed))); it is simply restricted to one environment and is no longer a portable C++ program...
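If you want that trivially-copyable assumption checked at compile time, a small guard next to the struct is enough:
#include <type_traits>
static_assert(std::is_trivially_copyable<MyStruct>::value,
              "MyStruct must remain trivially copyable for the memcpy round-trip");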
|
74,244,055
| 74,250,856
|
In C++ get smallest integer type that can hold given amount of bits
|
If I have compile time constant num_bits how can I get smallest integer type that can hold this amount of bits?
Of course I can do:
Try it online!
#include <cstdint>
#include <type_traits>
std::size_t constexpr num_bits = 19;
using T =
std::conditional_t<num_bits <= 8, uint8_t,
std::conditional_t<num_bits <= 16, uint16_t,
std::conditional_t<num_bits <= 32, uint32_t,
std::conditional_t<num_bits <= 64, uint64_t,
void>>>>;
But maybe there exists some ready made meta function in standard library for achieving this goal?
I created this question only to find out whether a single meta function for this exists in the standard library. But if you have other nice suggestions for how to solve this task besides my proposed solution above, then please post them too...
Update. As suggested in the comments, uint_leastX_t should be used instead of uintX_t everywhere in my code above, as uintX_t may not exist on some platforms, while uint_leastX_t always exists.
|
There is currently no template type for this in the c++ standard library.
The standard library does have something to do the reverse of what you're looking for (std::numeric_limits<T>::max).
There is a proposal to add such functionality to the c++ standard library in p0102: "C++ Parametric Number Type Aliases". A similar question to yours has been asked on lists.isocpp.org/std-proposals, where p0102 was mentioned.
In case you're interested / okay with using Boost, Boost has boost::int_max_value_t<V>, which is a struct defining type members least (smallest) and fast ("easiest to manipulate").
|
74,244,075
| 74,244,239
|
When I try to use string.insert() it tells me there is a length error at memory location... How do I fix this?
|
#include <iostream>
#include <string>
#include <cstring>
using namespace std;
string empty(string str) {
for (int i = 0;i < str.length();i++) {
if (str[i] == ' ') {
str.insert(str[i], ",");
}
cout << str[i];
}
return str;
}
int main() {
string str;
getline(cin, str);
empty(str);
return 0;
}
I tried string.resize, or in loop i<str.max_size, str.size and str.replace, I tried to add +1 to size or to length but nothing works.
|
You are getting a runtime error because the 1st parameter of insert() expects an index but you are passing it a char instead, and your input string happens to contain a character whose numeric value is larger than the string's length.
After fixing that, you will then run into a new problem with the loop getting stuck running endlessly once an insert occurs. This is because you are inserting the comma character BEFORE the current space character. So each insert pushes the current space character (and subsequent characters) forward 1 position. Then the next loop iteration increments i skipping the newly inserted comma character but then finds the same space character again, so it inserts again. And again. And again. Endlessly.
To prevent that, you would need to increment i by an extra +1 after an insert to skip the space character.
Try this:
void empty(string str) {
for (size_t i = 0; i < str.length(); ++i) {
if (str[i] == ' ') {
str.insert(i, ",");
++i;
}
}
cout << str;
}
|
74,244,875
| 74,255,124
|
Access vertex via vertex_index property?
|
Is there a built-in way in BGL to access a vertex via its vertex_index property?
using VertexProperties = boost::property<boost::vertex_index_t, int>;
using DirectedGraph = boost::adjacency_list<
boost::listS,
boost::vecS,
boost::directedS,
VertexProperties,
boost::no_property>;
using VertexDescriptor = boost::graph_traits<DirectedGraph>::vertex_descriptor;
using EdgeDescriptor = boost::graph_traits<DirectedGraph>::edge_descriptor;
int main()
{
DirectedGraph g;
auto indices = boost::get(boost::vertex_index, g);
boost::add_vertex(DirectedGraph::vertex_property_type{123123}, g); // unique index
boost::add_vertex(DirectedGraph::vertex_property_type{451345}, g); // unique index
// how do I access the vertex with my unique index?
// I only see the way from vertex->vertex_index via the property map
indices[x];
}
|
By far the most user-friendly solution in this realm is to use internal name properties. This is a mixin-feature for boost::adjacency_list (implemented in maybe_named_graph).
Moreover, your approach "repurposing" vertex_index_t for this is probably misguided: vertex indices are a library "contract" which assumes that it maps all vertices vertices(g) to a contiguous integral domain [0, num_vertices(g)). Your ID doesn't satisfy those properties so it will lead to unspecified, and most likely Undefined Behaviour when you invoke BGL algorithms on your graph.
All in all, I'd redefine the properties as your own bundle:
struct VertexProps {
int my_id;
VertexProps(int id) : my_id(std::move(id)) {}
};
using Graph = boost::adjacency_list<boost::listS, boost::listS, boost::directedS, VertexProps,
boost::no_property>;
using V = Graph::vertex_descriptor;
using E = Graph::edge_descriptor;
Now you need to specialize some traits to tell maybe_named_graph that some part of VertexProps is your vertex name:
template <> struct boost::graph::internal_vertex_name<VertexProps> {
struct type {
using result_type = int;
int const& operator()(VertexProps const& vp) const { return vp.my_id; }
};
};
template <> struct boost::graph::internal_vertex_constructor<VertexProps> {
struct type {
auto operator()(int const& value) const { return VertexProps{value}; }
};
};
Now you can add vertices by name:
V a = add_vertex(123123, g); // unique index
V b = add_vertex(451345, g); // unique index
But instead of using a or b you can keep using the name for subsequent operations:
add_edge(123123, 451345, g);
You can get vertex descriptors from the property explicitly:
V va = *g.vertex_by_property(123123);
V vb = *g.vertex_by_property(451345);
As a surprise bit of user-friendliness, you don't even need to create vertices ahead of time:
add_edge(234234, 562456, g);
will magically add the two new, named, vertices.
Live Demo
Live On Coliru
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/graph_utility.hpp>
struct VertexProps {
int my_id;
VertexProps(int id) : my_id(std::move(id)) {}
};
template <> struct boost::graph::internal_vertex_name<VertexProps> {
struct type {
using result_type = int;
int const& operator()(VertexProps const& vp) const { return vp.my_id; }
};
};
template <> struct boost::graph::internal_vertex_constructor<VertexProps> {
struct type {
auto operator()(int const& value) const { return VertexProps{value}; }
};
};
using Graph = boost::adjacency_list<boost::listS, boost::listS, boost::directedS, VertexProps,
boost::no_property>;
using V = Graph::vertex_descriptor;
using E = Graph::edge_descriptor;
int main() {
Graph g;
print_graph(g, get(&VertexProps::my_id, g), std::cout << "\n --- Empty:\n");
auto my_id = get(&VertexProps::my_id, g);
V a = add_vertex(123123, g); // unique index
V b = add_vertex(451345, g); // unique index
print_graph(g, my_id, std::cout << "\n --- Vertices:\n");
std::cout << g[a].my_id << std::endl;
std::cout << g[b].my_id << std::endl;
add_edge(123123, 451345, g);
print_graph(g, my_id, std::cout << "\n --- Edge:\n");
V va = *g.vertex_by_property(123123);
V vb = *g.vertex_by_property(451345);
add_edge(vb, va, g);
print_graph(g, my_id, std::cout << "\n --- Edges:\n");
add_edge(234234, 562456, g);
print_graph(g, my_id, std::cout << "\n --- Magically added:\n");
}
Prints
--- Empty:
--- Vertices:
123123 -->
451345 -->
123123
451345
--- Edge:
123123 --> 451345
451345 -->
--- Edges:
123123 --> 451345
451345 --> 123123
--- Magically added:
123123 --> 451345
451345 --> 123123
562456 -->
234234 --> 562456
More and Caveats
There are a number of interesting edge cases around this. You can read more about them here:
How to use boost graph algorithm on a named graph?
|
74,245,830
| 74,246,419
|
Unexpected termination after the first input
|
When I put 1000000000000 as the input the program terminates, and does not allow the user to enter x.
#include <bits/stdc++.h>
using namespace std;
int main(){
int n;
cin>>n;
int arr[n];
int x;
cin>>x;
int k=0;
for (int i = 1; i <= n; i+=2)
{
arr[k]=i;
k++;
}
for (int j = 2; j <= n; j+=2)
{
arr[k]=j;
k++;
}
cout<<arr[x-1];
}
I was expecting to be able to enter the second input after 1000000000000.
|
Your program has some errors, some of which do not even allow it to compile.
I added comments to your source code to show yout the problems:
#include <bits/stdc++.h> // This is not a C++ standard header; it is a compiler extension. Do not use it
using namespace std; // Do not use it. Always use scoped identifiers
int main() {
int n; // Too small to hold the expected value
cin >> n; // No input check (try to enter "abc"). Always check the result of input
int arr[n]; // This is not C++. No C++ compliant compiler will compile this
int x; // Variable not initialized
cin >> x; // No check for valid input and no check for x must be smaller than n
int k = 0;
// ----- All not necessary. Can be solved mathematically
for (int i = 1; i <= n; i += 2)
{
arr[k] = i;
k++;
}
for (int j = 2; j <= n; j += 2)
{
arr[k] = j;
k++;
}
// -------
cout << arr[x - 1]; // potential out of bounds
}
Then, of course, your desired input value is too big. You can most probably not allocate a 4-terabyte array on your computer. It does not have enough memory. With a "big" machine and swapping it could work, but for normal machines it will not.
Then, next, you need always to check the result of an IO operation.
In your case, you try to read a huge number into an int variable. The input stream will fail and do nothing any longer. You cannot read additional values after that. You would need to call std::cin.clear() to continue.
So, always check the result of your IO operation. But how?
You need to understand the stream's >> operator a little bit. Please read here about that. It is especially important to understand that all those "extraction" expressions, like "someStream >> x", always return a reference to a stream. So "someStream >> x" will extract a value from the stream, format it to the expected type given by the variable "x" and then assign the value to "x".
And it will always return a reference to the stream it was called on. In this case a reference to "someStream".
If it cannot do this extraction or formatting, because you enter an invalid value, for example "abc" for an integer or a huge number for an integer, then the operation would fail.
And there are 2 functions of the stream, that you can use to check the state of the stream.
operator bool. This will return true, if there is no error
operator!. This will return true, if there was an error
With that: if you write if (cin >> n), then the expression in the brackets is evaluated. It tries to read "n" from "cin". After that it returns a reference to "cin". Now the statement looks like if (cin). The if statement expects a boolean expression. Hence, the bool function of the stream will be called, and it gives you the information whether the operation was successful or not.
Summary. Always check the result of IO operations.
Back to your code. Making it compilable would be:
#include <iostream>
int main() {
long long n;
std::cin >> n;
int* arr = new int[n];
int x;
std::cin >> x;
int k = 0;
for (int i = 1; i <= n; i += 2)
{
arr[k] = i;
k++;
}
for (int j = 2; j <= n; j += 2)
{
arr[k] = j;
k++;
}
std::cout << arr[x - 1] << '\n';
delete[] arr;
}
But this is of course still nonsense, because we do not need an array or a loop at all. We can simply calculate the result.
See the final example code below:
#include <iostream>
int main() {
// Define working variables
long long n{}, x{};
// Read values and check input
if ((std::cin >> n >> x) and (n > 0) and (x > 0) and (x <= n))
// Show result
if (x > (n / 2))
std::cout << ((x - (n / 2)) * 2) << '\n';
else
std::cout << (x * 2 - 1) << '\n';
else
std::cerr << "\n\n***Error: Invalid input\n\n";
}
|
74,245,946
| 74,245,968
|
Change color of single entity in OpenGL scene?
|
I'm trying to change the color of a single polygon shape (quads) in my exercises using legacy OpenGL (FreeGLUT + GLEW), but obviously when I call glColor3f the colors of all the models in the scene (texts included) get overwritten by the new color state set by the function.
The code is fairly simple:
void drawScene(void) {
// Cleaning procedures
glClear(GL_COLOR_BUFFER_BIT);
// Setting the color of the models as green
glColor3f(0.0, 1.0, 0.0);
// Drawing a square that will rotate based on the mouse coord on the screen
glPushMatrix();
glRotated(theta, 0.0, 0.0, 1.0);
glBegin(GL_QUADS);
// HERE: i'd like to change only the color of the square, but instead
// everything get modified to the new color.
glColor3f(colors.r, colors.g, colors.b);
glVertex3f(-2.0, 2.0, 0.0);
glVertex3f(-2.0, -2.0, 0.0);
glVertex3f(2.0, -2.0, 0.0);
glVertex3f(2.0, 2.0, 0.0);
glEnd();
glPopMatrix();
// Text formatting
auto coords = std::format("Mouse coordinate is x ({}) , y ({})", xm, ym);
auto angle = std::format("Rotation angle is {}", xm/10);
glRasterPos3f(-6.0, -6.0, 0.0);
writeBitmapString(GLUT_BITMAP_9_BY_15, coords.data());
glRasterPos3f(-6.0, -8.0, 0.0);
writeBitmapString(GLUT_BITMAP_9_BY_15, angle.data());
// Swapping buffer to the back and front
glutSwapBuffers();
}
Any suggestion on how to achieve single coloring on a model?
|
glColor3f sets a global state. This state is kept until it is changed again. Set the state explicitly before each object. Alternatively, you can reset the color attribute to "white" after drawing an object.
glColor3f(colors.r, colors.g, colors.b);
glBegin(GL_QUADS);
glVertex3f(-2.0, 2.0, 0.0);
glVertex3f(-2.0, -2.0, 0.0);
glVertex3f(2.0, -2.0, 0.0);
glVertex3f(2.0, 2.0, 0.0);
glEnd();
glColor3f(1.0f, 1.0f, 1.0f);
|
74,246,134
| 74,252,704
|
Single-token and multi-token positional options with boost::program_options
|
I am writing a program that would receive as parameters a filename followed by multiple strings. Optionally it could take a -count argument;
./program [-count n] <filename> <string1> <string2> ...
This is the code I wrote:
PO::positional_options_description l_positionalOptions;
l_positionalOptions.add("file", 1);
l_positionalOptions.add("strings", -1);
PO::options_description l_optionsDescription;
l_optionsDescription.add_options()
("count", PO::value<int>()->default_value(1), "How many times to write"),
("file", PO::value<std::string>(), "Output file name"),
("strings", PO::value<std::vector<std::string>>()->multitoken()->zero_tokens()->composing(), "Strings to be written to the file");
PO::command_line_parser l_parser {argc, argv};
l_parser.options(l_optionsDescription)
.positional(l_positionalOptions)
.allow_unregistered();
PO::variables_map l_userOptions;
try {
PO::store(l_parser.run(), l_userOptions);
}
catch (std::exception &ex) {
std::cerr << ex.what() << std::endl;
exit(1);
}
However, when I run ./program file.out str1 str2 str3 it fails with:
unrecognised option 'str1'
What am I doing wrong? Is this even possible with boost::program_options?
|
I figured it out. It's as dumb as it can get.
The issue was that I had a , after every entry in add_options().
This made it so only the first entry would get saved.
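For reference, the corrected add_options() chain with the stray commas removed, so all three options are registered:
l_optionsDescription.add_options()
    ("count", PO::value<int>()->default_value(1), "How many times to write")
    ("file", PO::value<std::string>(), "Output file name")
    ("strings", PO::value<std::vector<std::string>>()->multitoken()->zero_tokens()->composing(), "Strings to be written to the file");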
|
74,247,792
| 74,248,420
|
Scale object in 2d to exactly overlay itself
|
I am trying to render an outline using Vulkan's stencil buffers. This technique involves rendering the object twice with the second one being scaled up in order to account for said outline. Normally this is done in 3D space in which the normal vectors for each vertex can be used to scale the object correctly. I however am trying the same in 2D space and without pre-calculated normals.
An Example: Given are the Coordinates I, H and J and I need to find L, K and M with the condition that the distance between each set of parallel vectors is the same.
I tried scaling up the object and then moving it to the correct location but that got me nowhere.
I am searching for a solution that is ideally applicable to arbitrary shapes in 2D space and also somewhat efficient. Also I am unsure if this should be calculated on the GPU or the CPU.
|
Let's draw an example of a single point of some 2D polygon.
The position of point M depends only on the position of A and its two adjacent lines; I have added the normals too - green and blue. Points P and O lie on the intersections of the shifted and non-shifted lines.
If we know the adjacent points B and C of A, and the distances to O and P, then
M = A - d_p * normalize(B-A) - d_o * normalize(C-A)
this is true because P, O lie on the lines B-A and C-A.
The distances are easy to compute from the two-color right triangles:
d_p=s/sin(alfa)
d_o=s/sin(alfa)
where s is the desired stencil shift. They are of course the same.
So the whole computation, given coordinates of A,B,C of some polygon corner and the desired shift s is:
b = normalize(B-A) # vector
c = normalize(C-A) # vector
alfa = arccos(b.c) # dot product
d = s/sin(alfa)
M = A - sign(b.c) * (b+c)*d
This also proves that M lies on the alfa angle bisector line.
Anyway, the formula is generic and holds for any 2D polygon, and it is easily parallelizable since each point is shifted independently of the others. But
for non-convex corners you need to use the opposite sign; that is what the sign(b.c) factor above generalizes.
It is not numerically stable when sin(alfa) is close to zero, i.e. when the b, c lines are almost parallel. In that case I would recommend just shifting A by s*n_b where n_b is the normalized normal of the B-A line, which in 2D is normalize((B.y - A.y, A.x - B.x)).
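If it helps, here is a small C++ sketch of the convex-corner case of that formula. It is only an illustration: the vec2 type and the normalize/dot helpers are assumed (a math library such as GLM already provides them), and the non-convex and near-parallel cases discussed above are not handled.
#include <cmath>

struct vec2 { double x, y; };
static vec2 operator-(vec2 a, vec2 b) { return {a.x - b.x, a.y - b.y}; }
static vec2 operator+(vec2 a, vec2 b) { return {a.x + b.x, a.y + b.y}; }
static vec2 operator*(vec2 a, double s) { return {a.x * s, a.y * s}; }
static double dot(vec2 a, vec2 b) { return a.x * b.x + a.y * b.y; }
static vec2 normalize(vec2 a) { double l = std::sqrt(dot(a, a)); return {a.x / l, a.y / l}; }

// Shift corner A outward by the stencil width s, given its neighbours B and C.
vec2 shiftCorner(vec2 A, vec2 B, vec2 C, double s) {
    vec2 b = normalize(B - A);
    vec2 c = normalize(C - A);
    double alfa = std::acos(dot(b, c)); // angle between the two edges at A
    double d = s / std::sin(alfa);
    return A - (b + c) * d;             // M lies on the angle bisector at A
}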
|
74,247,829
| 74,248,282
|
Can Cmake add 'time' before ./main in the command line to measure program execution time?
|
I am wanting to measure the time it takes for my C++ video processing program to process a video. I am using CLion to write the program and have Cmake set up to compile and automatically run the program with a test video. However, in order to find execution time I have been using the following command in the MacOS terminal:
% time ./main ../Media/test_video.mp4
Is there a way for me to configure Cmake to automatically include time in the execution of ./main to streamline my process further?
So far I've tried using set(CMAKE_ARGS time "test_video.mp4") and some command line argument functions but they don't seem to be acting in the way that I'm looking for.
|
It is possible to use add_custom_target to do what you want. I'll not consider this option further as it seems to abuse the build system for something it wasn't designed to do. Yet it may have an advantage over a CLion configuration: it would be available outside of CLion. That advantage seems minor: why not run the desired command directly in those contexts?
The first CLion method is to define an external tool which runs time on the current build target. In File|Settings...|Tools|External Tools define a new tool with /bin/time as the program and $CMakeCurrentProductFile$ $Prompt$ as the arguments. When choosing that tool (in Tools|External Tools) it will now prompt you for the arguments and then run /bin/time on the current target with the provided arguments. Advantage: you only have to define the tool once; it will be available in every project. Drawbacks: because external tools are available in every project, it doesn't make sense to be more specific than $Prompt$ for the arguments and the working directory; it isn't possible to specify environment variables; and it isn't possible to enforce that a build happens before running the command.
The second CLion method is to define a Run/Debug Configuration. Again use /bin/time as the program (choose "Custom Executable") and specify $CMakeCurrentProductFile$ as the first argument (here it makes sense to provide the other arguments as desired, but note that $Prompt$ is still a valid choice if needed). Advantages: it makes sense to be as specific as needed; you have all the features of configurations (environment variables, input redirection, actions to be executed before launching). Drawback: it doesn't work with other CLion features which assume that the program is the target, such as the debugger, the profiler, ... so you may have to duplicate configurations to get those.
Note that the methods aren't exclusive: you can define an external tool and also add configurations for the cases where that is more convenient.
|
74,248,524
| 74,249,421
|
Trouble reading integer metadata from file
|
Trying to read the file size data from a bitmap file. I know that I have the offset (0x02) correct and can find the correct file size in a hex editor.
uint32_t getBMPSize(string bmpPath) {
char sizeread[2] = {};
uint32_t size;
ifstream bmpFile;
bmpFile.open(bmpPath);
uint16_t offset = 0x02;
if (bmpFile.is_open()) {
bmpFile.seekg(offset);
bmpFile.read(sizeread, 1);
bmpFile.close();
}
size = *sizeread;
// Convert size from little-endian to big endian
size = (size >> 24) |
((size << 8) & 0x00FF0000) |
((size >> 8) & 0x0000FF00) |
(size << 24);
return size;
}
I am expecting to get (8A 7B 0C 00) returned to sizeread. This should then be converted to a big-endian 32bit unsigned integer as 818058. As it stands now, the function returns 2332033023.
|
Thanks. Fixed by reading directly to uint32_t using bmpFile.read(reinterpret_cast<char*>(&size), 4).
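For anyone landing here later, a minimal sketch of the fixed function under that approach — note the file is opened in binary mode, and since the BMP size field is stored little-endian, reading it directly on a little-endian machine needs no byte swap:
#include <cstdint>
#include <fstream>
#include <string>

uint32_t getBMPSize(const std::string& bmpPath) {
    uint32_t size = 0;
    std::ifstream bmpFile(bmpPath, std::ios::binary);    // binary mode matters here
    if (bmpFile.is_open()) {
        bmpFile.seekg(0x02);                              // offset of the file-size field
        bmpFile.read(reinterpret_cast<char*>(&size), 4);  // read all 4 bytes at once
    }
    return size;                                          // already correct on little-endian hosts
}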
|
74,249,383
| 74,249,457
|
std::condition_variable not working with std::this_thread::sleep_for()
|
Could someone explain why this code does not work as expected when the sleep call in producer() is uncommented? It works fine if the sleep call is moved out of the scope of the unique_lock.
To clarify:
Working as expected: producer() creates new messages that are stored in data those messages are then printed by consumer().
What happens when the sleep call is uncommented: no output is produced. I would expect new messages to be produced and printed at 1 second intervals.
#include <atomic>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <ratio>
#include <string>
#include <thread>
#include <queue>
std::string data;
std::atomic<bool> ready;
std::condition_variable cv;
std::mutex m;
void producer()
{
int c{};
while (true)
{
++c;
{
std::unique_lock<std::mutex> lock{ m };
data = "count: " + std::to_string(c);
// std::this_thread::sleep_for(std::chrono::seconds{ 1 }); // THIS LINE
ready = true;
}
cv.notify_one();
}
}
void consumer()
{
while (true)
{
std::unique_lock<std::mutex> lock{ m };
cv.wait(lock, []{ return ready == true; });
std::cout << data << '\n';
ready = false;
}
}
int main()
{
std::thread tp{ producer };
std::thread tc{ consumer };
tp.join();
tc.join();
}
|
Each iteration of the while loop in the producer thread takes a lock, computes data, sets ready=true, and then unlocks before repeating.
In the case of the sleep statement being uncommented:
Since the mutex is still held by the producer thread while sleep is invoked, the consumer thread cannot make progress either: it is blocked inside wait trying to acquire that same mutex. So for that whole second, neither thread does any useful work.
Then the sleep completes, which lets the producer thread continue (with a full quantum). Within that quantum it will likely do the following:
sets ready = true
unlocks the mutex as a result of the scope completing
invokes notify_one to signal the consumer thread that it can get a shot. But keep in mind, the consumer thread is likely still paused at this point while the producer thread is running.
re-acquires the lock.
Meanwhile the consumer thread is blocked on the wait call. But it eventually gets signaled as available to run when the notify_one call is invoked by the producer thread. Then this likely happens:
The consumer thread will get woken up during step 3 of the producer thread above. The thread's register state and other stuff will get restored by the OS. This takes a few cycles. Then it attempts to acquire the lock itself. But by the time it tries to acquire the lock, the producer thread has grabbed it again, because it's already running and not blocked on anything.
Now, I ran your program with the sleep statement uncommented. It will occasionally print a count: XX line, but rarely. There's a small window in which the consumer thread can "win" the race with the producer thread and acquire the mutex. But it will take many iterations and some luck. In my local repro of building and running your original code with the sleep statement uncommented, the program printed this line exactly and nothing else after running for several minutes.
count: 41
It just happens to work when the sleep statement is commented out because the thread scheduler just happens to occasionally context switch between threads in between unlock and lock calls to the mutex. You might even have different behavior depending on the number of available cores on your system. As evidence of this, you can see that the consumer thread winds up skipping over a bunch of values on each consecutive print call. That's because the producer thread isn't ever blocked on i/o like the consumer thread is with the cout and wait statements.
count: 131
count: 134
count: 2661
count: 2804
count: 2880
...
I'm guessing you want the consumer thread to print each new value assigned to data. If that's the case, don't hold the mutex while sleeping. You could explicitly unlock and relock the mutex around the sleep, but simply moving the sleep outside the locked scope (swapping a few lines) suffices:
void producer()
{
int c{};
while (true)
{
++c;
{
std::unique_lock<std::mutex> lock{ m };
data = "count: " + std::to_string(c);
ready = true;
}
cv.notify_one();
std::this_thread::sleep_for(std::chrono::seconds{ 1 });
}
}
This will result in consumer printing out each unique value of data instead of skipping over any:
count: 1
count: 2
count: 3
count: 4
count: 5
count: 6
|
74,249,425
| 74,249,528
|
C++: Have a class always cast as specified type
|
So I am not sure if this is at all possible in C++, but I would like to ensure that a class is always cast as another type by default.
Here is an example of what I would imagine this looks like:
class A : public always_cast<std::string> {
private:
std::string *actualData = new std::string("foo");
public:
operator std::string &(){
return *this->actualData;
}
};
void main(){
A value;
std::cout << value.append(" bar") << std::endl; // should print "foo bar"
}
Currently, the closest I can get to this behavior is to just override the cast operator, but I have to explicitly cast it in order to access std::string's member functions:
class A {
private:
std::string *actualData = new std::string("foo");
public:
operator std::string &(){
return *this->actualData;
}
};
void main(){
A value;
std::cout << ((std::string) value).append(" bar") << std::endl; // successfully prints "foo bar"
}
This is a simple example of how I would want to use this. I am trying to build a headers-only library involving lots of templates, and the "always casted" type could be any class type. Although the manual cast is not a deal breaker, it adds an extra step that I would like to allow developers using my library to skip, for their convenience and readability of their code. My library is currently using C++20.
So is there any way to achieve this (or similar) functionality that I cannot think of? Or is this just a pipe dream and I should just settle for having my library just slightly more inconvenient to use?
EDIT: Well alright, I can tell from the context of the responses that this not really possible, and probably not a very good design choice even if I could pull it off. I think the best solution for my use case is to override the -> operator and use that to access the fields, as some people have recommended in the comments.
|
What you're talking about is broadly conceptualized as the ability to overload "operator dot" in a similar way to how we can overload operator*. There have been many proposals in years past on providing this feature to C++, but none of them achieved consensus. My read of the committee is that, like UFCS, they've basically given up on it.
One of the big sticking points has been the question of conflicts. If your class A itself has an append function, which one gets called? Some say that it should always be the converted type; others say that it should always be the main class type; a few others say it should be a compile error.
And no, there is no other way to call a conversion function in order to make .whatever compile. Some "operator dot" suggestions used "base classes" as a way to settle all of the conflict questions (since those have all been answered for base classes), but at present, the language has no way to do what you want.
|
74,249,667
| 74,249,685
|
Why are normal C-array indices signed while stl indices are unsigned?
|
I understand why stl indices are unsigned, because you would never have a negative index. But for normal C arrays, the indices are signed. Why is this?
If there is a good reason for C array indices to be signed, why did they decide to make stl indices different?
|
Array indexing in C is really just a pointer offset. x[y] is exactly the same as *(x + y). That allows you to do things like this:
int a[3] = { 1, 2, 3 };
int *p = a; /* p points to a[0] */
printf("p[1]=%d\n", p[1]); /* prints 2 */
p += 2; /* p points to a[2] */
printf("p[-1]=%d\n", p[-1]); /* prints 2 */
Which is why negative array indexing is allowed.
|
74,250,094
| 74,256,115
|
partial template specialization and deduction guides
|
I have a question on when deduction guides need to be specified, in particular why explicitly defining the constructor in the base template below removes the requirement for it. IOW, either the constructor in the base template needs to be defined or the deduction guide is needed for the snippet below to compile. I'm guessing it relates to deduction of the base template parameter requiring the constructor but would appreciate the clarification here.
#include <utility>
template <typename ...T>
struct Type {
// Needs to be uncommented if deduction guide is not specified
// Type(T&&...) { }
};
template <typename T, typename ...Ts>
struct Type<T, Ts...> : Type<Ts...> {
Type(T&& x, Ts&&... y) : Type<Ts...>(std::forward<Ts>(y)...) {
}
};
//template <typename ...T>
//Type(T&&...) -> Type<T...>;
int main() {
Type t{1,2.3,false};
}
|
Class template argument deduction uses a set of implicit guides synthesized from the primary template definition, plus any explicit deduction-guides declared for the class template.
The partial specialization of Type does not participate in the generation of implicit guides. Nor would any explicit (full) specialization, if there were any in your program.
If you do not declare an explicit deduction-guide, then you must have an appropriate constructor in the primary class template in order for a similar guide to be generated implicitly.1
1 There are some exceptions; certain guides are generated implicitly even in the absence of a constructor. I won't get into this here.
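Concretely, for the snippet in the question this means either uncommenting the constructor in the primary template or keeping the explicit guide; the guide by itself (the one commented out in the question) looks like this:
template <typename ...T>
Type(T&&...) -> Type<T...>;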
|
74,250,649
| 74,250,775
|
How to split strings and pass them to class vectors from file
|
For my university class in programming we have been working on Object Oriented Programming (OOP) and are currently working on a group project. The project is to create a cash register that holds items with their names, amounts, and prices. As well as have a way to track the coins given by the user then determine the coin denomination. These are supposed to be done in different classes and involve objects.
My question is regarding the inventory manager that I am coding. The inventory manager is supposed to take the "data.txt" file and read it into the appropriate vectors. Currently I have a vector for the item name, price, amount, and then the itemList vector which holds a string of all 3 to print to the user for readability.
Here is a snippet from the data file:
20 1.99 Potato Chips
10 5.99 Ibuprofen
4 1.42 Candy
55 3.10 Coffee
12 3.25 Hummus
12 4.55 Guacamole
7 0.80 Baklava
45 1.50 Chocolate Chip Cookies
My question is, how do I split the line up so that I can pass the amount (first number) to the appropriate vector, pass the price to the appropriate vector, then pass the name to the appropriate vector. Essentially splitting each line into 3 parts. The part that is the most difficult to me is how the names of the items can be 1 word, 2 words, or even 3 words. Making it difficult to predict how many spaces the name will have, which caused a lot of my attempts to not work.
I found a working solution though I'm worried it's incorrect or inefficient and I'm curious in knowing the best way to do this. Thank you so much ahead of time and I will post the class definition and the method I'm working in down below.
(The stream object is inFile.)
class inventoryManager
{
private:
double total, change, choice;
vector <int> amount;
vector <string> name;
vector <double> price;
vector <string> itemList;
public:
void fileOpen(fstream& inFile);
void fillInventory(fstream& inFile);
void printInventory(fstream& inFile);
};
void inventoryManager::fillInventory(fstream& inFile)
{
string temp;
string a, b;
while (!inFile.eof())
{
inFile >> a >> b;
amount.push_back(stoi(a));
price.push_back(stod(b));
getline(inFile, temp);
name.push_back(temp);
itemList.push_back(temp);
}
}
Attempted: I tried using the find() function to find each space then use the index to print the estimated indices to each side of the white space to the vectors. Though it was extremely messy and wouldn't work if a different data file was inputted. The idea of the program is to be able to work if a different file is put in, similar format but the amount could be 100, and prices could be more digits. I also tried editing the file directly by adding new lines for each space but ran into the item names with multiple spaces and ultimately didn't work.
|
You are trying to do too many new things at once. Break the problem into pieces and solve them separately, then combine them. To break the string into pieces you can use find and substr, you just have to be careful and debug until you're sure it's working perfectly:
string s = "45 1.50 Chocolate Chip Cookies";
cout << "s: " << s << endl; // for debugging
size_t first = s.find(' ');
cout << first << endl; // for debugging
string amountString = s.substr(0, first);
cout << amountString << "X" << endl; // for debugging
size_t second = s.find(' ', first+1);
cout << second << endl; // for debugging
string priceString = s.substr(first+1,second-first-1);
cout << priceString << "X" << endl; // for debugging
string nameString = s.substr(second+1);
cout << nameString << "X" << endl; // for debugging
The purpose of those X's is to be certain that the substring has no trailing space.
Once you have tackled the problem this way (and handed the result in for a grade), you can advance to tools like stringstream, and you won't have to deal with this grubby index arithmetic any more.
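For when you get there, a rough sketch of the same split done with a std::istringstream — this is a standalone example and does not assume anything about your inventoryManager's members:
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::string s = "45 1.50 Chocolate Chip Cookies";
    std::istringstream iss(s);
    int amount;
    double price;
    std::string name;
    iss >> amount >> price;              // the two numeric fields
    std::getline(iss >> std::ws, name);  // the rest of the line is the name
    std::cout << amount << " | " << price << " | " << name << "\n";
}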
|
74,250,767
| 74,293,147
|
CMake output configured with vscode not running in terminal
|
Okay so I am new to using CMake and I was trying to get it to work in vscode. I am using the extension CMake Tools to run the build and configuration. I'm running a basic hello world program that writes an output as well to test everything out and what happens is when the executable produced gets run from the terminal it does not produce any output.
What I am expecting to happen is when I do the configuration and building with the extension it produces an output file that when run from the terminal says hello world and writes a example file. What actually happens is it does not output anything at all when run from the terminal but when run through the extension it does give an output of text in the terminal the extension opens up and it produce a file.
What I've tried so far is compiling the program from g++ and it works as expected running from the terminal, I've created the cmake project and built it manually from the terminal and it works as expected running from the terminal, and I have finally created the cmake project manually from the terminal and built it inside of vscode using the build task and it works as expected running from the terminal. The only time it seems to not work as I would expect is when the vscode extension configures the project automatically. In all of the cmake projects it was built in release mode.
One thing I've notice about the executable that get outputted is the ones that function when called by the regular terminal is they are a larger file size then the ones that do not output so I assume that some setting in the automatic configuration is causing this which is probably the problem just I'm not sure what.
The code for the cpp program is
#include <iostream>
#include <fstream>
int main(int argc, char const *argv[])
{
std::ofstream myfile;
myfile.open ("example.txt");
myfile << "Writing this to a file.\n";
myfile.close();
std::cout<<"hello world"<<'\n';
return 0;
}
The cmakelist.txt is this
cmake_minimum_required(VERSION 3.0.0)
project(abc123 VERSION 0.1.0)
include(CTest)
enable_testing()
add_executable(abc123 main.cpp)
set(CPACK_PROJECT_NAME ${PROJECT_NAME})
set(CPACK_PROJECT_VERSION ${PROJECT_VERSION})
include(CPack)
I am also using MinGW for the gcc compiler and cmake
So in summary is there a way to get the auto configuration of the extension to produce a output file that can be run from anywhere on my system not just through vscode extension
Thanks
Edit:
I tried the same thing on linux and the cmake extension works as expected it seems like this is only a problem on windows
|
Okay, so after switching from MinGW to Mingw-w64 with MSYS2, everything works as expected. Not sure what caused this problem with MinGW though.
|
74,250,832
| 74,250,907
|
how can i slice a V_BSTR in c++?
|
Am working on active Directory and I need to retrieve the userAccountControl attribute and check for an option so I used get method as follows
VARIANT var;
VariantInit(&var);
hr = pUsr->Get(CComBSTR("userAccountControl"), &var);
and stored it in a variant, then converted it to V_BSTR to print the result.
if (SUCCEEDED(hr)) {
std::cout << V_BSTR(&var) << std::endl;
}
the result is 0000000000010200
Now I need to slice it so that I get only the last 3 digits.
Is there a way to slice a BSTR, or which datatype do I need to convert it to? Please help me with the conversion too.
|
You can convert the BSTR to std::wstring,
and then use std::wstring::substr to extract the last 3 characters:
#include <atlbase.h>
#include <assert.h>
#include <string>
#include <iostream>
int main() {
VARIANT var;
VariantInit(&var);
var.bstrVal = CComBSTR("0000000000010200");
// Convert BSTR to std::wstring:
std::wstring wstr(var.bstrVal, SysStringLen(var.bstrVal));
assert(wstr.length() >= 3);
// Extract last 3 characters:
std::wstring wstrLast3chars = wstr.substr(wstr.length() - 3, 3);
std::wcout << wstrLast3chars << std::endl;
}
Output:
200
|
74,250,866
| 74,251,140
|
Find digit in an integer at user requested position
|
I am trying to take a user entered integer (positive or negative) and let them pick a digit position and output the digit at that position.
int findDigAtPos(int number)
{
int position;
int result;
cout << "Enter digit position between 1 and " << std::to_string(number).length() << endl;
cin >> position;
while (position < 0 || position > std::to_string(number).length())
{
cout << "Enter digit position between 1 and "<< std::to_string(number).length()<<endl;
cin >> position;
}
if (std::to_string(number).length() == 1)
cout << "Only one digit" << endl;
number = number / pow(10, (position - 1.0));
result = number % 10;
return result;
}
This is the code I currently have for the function. However, it outputs the number in reverse. How do I correct the digit position? I thought it didn't even function correctly until noticing it's in reverse order.
|
First, note that you shouldn't be using the pow function when working with integers, because it returns a double result, which can cause problems due to unexpected truncation of the result.
But, if you insist on using it, then you need to remember that the power of 10 by which to divide will decrease as the digit position increases: i.e., the position is given with the leftmost (most significant) digit in position 1. Thus, that power of 10 will be total number of digits minus the position:
number = number / pow(10, (std::to_string(number).length() - position));
result = number % 10;
The safer method (not using pow) would be a small loop, like this:
for (int d = std::to_string(number).length() - position; d > 0; --d) number /= 10;
result = number % 10;
However, as you're already converting the passed number to a string, then why not just save that string and extract the digit from it – at position-1, because array (and string) indexes in C++ are zero-based:
int findDigAtPos(int number)
{
int position;
std::string str = std::to_string(std::abs(number)); // If -ve, remove sign!
int ndig = str.length();
std::cout << "Enter digit position between 1 and " << ndig << "\n";
std::cin >> position;
while (position < 0 || position > ndig) {
std::cout << "Enter digit position between 1 and " << ndig << "\n";
std::cin >> position;
}
return str.at(position-1) - '0';
}
Note that the codes (ASCII or otherwise) for the numeric digits, 0 thru 9 are guaranteed by the C++ standard to be contiguous and in order, so subtracting the value of the digit 0 will 'convert' any digit to its actual, numerical value.
|
74,251,092
| 74,251,401
|
Get clicked item on QTreeWidget
|
class TreeWidget : public QTreeWidget
{
Q_OBJECT
public:
TreeWidget(QWidget* parent = 0) : QTreeWidget(parent)
{
connect(this, &QTreeWidget::itemClicked, this, &TreeWidget::onItemClicked);
}
public slots:
void onItemClicked(QTreeWidgetItem* item, int column)
{
auto _item = dynamic_cast<QTreeWidgetItem*>(item);
qDebug() << "item: " << item << " _item: << _item;
// if (item == 0) { ... }
// elseif ...
}
};
I'm confused about how to get the clicked item of a QTreeWidget, in my example, qDebug() prints something like item: 0x2a64e3edfb0 what is the 'proper' way to read it?
I'm trying to perform different actions according to the item clicked.
|
In your code item: 0x2a64e3edfb0 is your object and 0x2a64e3edfb0 is your object's address in memory.
But your QTreeWidgetItem object has functions and properties like its text and you can get it like this:
void MainWindow::on_treeWidget_itemClicked(QTreeWidgetItem *item, int column)
{
qDebug() << "item: " << item <<","<<item->text(column);
}
To get the index of a QTreeWidgetItem you should ask its parent, i.e. the QTreeWidget,
like this:
void MainWindow::on_treeWidget_itemClicked(QTreeWidgetItem *item, int column)
{
qDebug() << "item: " <<ui->treeWidget->indexOfTopLevelItem(item);
}
The output shows the index of the clicked top-level item.
If item is a child of a branch then you can use this:
qDebug() << "item: " <<ui->treeWidget->indexFromItem(item,column).row();
Look at QTreeWidget::indexFromItem function.
|
74,251,265
| 74,251,324
|
what is V_VT and what does it return?
|
I am working with Active Directory and learned that V_VT is used to get the type of a VARIANT, but when I use it and print the result it shows 3. What exactly does that mean? Where can I find the documentation about it?
VARIANT var;
VariantInit(&var);
hr = pUsr->Get(CComBSTR("userAccountControl"), &var);
if (SUCCEEDED(hr)) {
std::cout << V_VT(&var) << std::endl;
VariantClear(&var);
}
|
I think you should not be using V_VT at all (macros are not typesafe). As for the value you are seeing: 3 is VT_I4 in the VARENUM enumeration, i.e. a 4-byte signed integer; the VARENUM documentation lists all of the type codes.
V_VT(x) provides (as the documentation states) a "convenient shorthand" to access VARIANT fields. E.g. V_VT(&vtXmlSource) = VT_UNKNOWN; is equivalent to
vtXmlSource.vt = VT_UNKNOWN;
BSTR's are wide character strings (with different allocator/deallocator), https://learn.microsoft.com/en-us/previous-versions/windows/desktop/automat/bstr. But you can use them to construct std::wstring from them.
Since you use ATL::CComBSTR, also consider using ATL::CComVariant to avoid memory leaks (manually calling ::VariantClear).
#include <cassert>
#include <atlbase.h>
#include <iostream>
#include <string>
int main()
{
ATL::CComVariant var{ L"hello world!" };
assert(var.vt == VT_BSTR);
ATL::CComBSTR bstr{ var.bstrVal };
std::wstring str{ var.bstrVal };
std::wcout << bstr.m_str << "\n";
std::wcout << str << "\n";
return 0;
}
|
74,253,209
| 74,253,249
|
If the ownership is moved from one unique pointer into another, is calling the method of the earlier object an undefined behavior?
|
If the ownership is moved from one unique pointer into another, why can I call the method of that object, from which the ownership was transferred? In the example: why can I call the foo method of p1 pointer? Isn't this an undefined behavior?
#include <iostream>
#include <memory>
class Car {
public:
int data{42};
void foo() {
std::cout << "Foo method is called." << std::endl;
}
~Car() {
std::cout << "Destructor is called." << std::endl;
}
};
int main() {
std::unique_ptr<Car> p1(new Car());
std::unique_ptr<Car> p2 = std::move(p1);
p1->foo(); // I can call this method, but isn't this an undefined behavior?
std::cout << p1->data << std::endl; // this give me segmentation error.
return 0;
}
Console Output:
Foo method is called.
Process finished with exit code -1073741819 (0xC0000005)
|
You can make that call because it's valid syntax. That doesn't mean it isn't also undefined behavior.
Take a look at the unique_ptr move constructor (the 5th one on that page). It says:
stores the null pointer in u
This means, in your example, after the move, p1 stores a null pointer. So, dereferencing it in any way is like dereferencing a null pointer, which is undefined behavior.
Undefined behavior doesn't mean the program will crash: in the case of p1->foo();, foo does not use anything from the object and might as well be a static or free function. It isn't accessing any memory that would cause a segmentation fault.
In other words, you crafted a situation where, by chance, a case of undefined behavior is the behavior you expect. This might not be true for someone else running the exact same code, or when running it on another platform, or compiled by another compiler, and so on.
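If it helps, a small illustrative sketch of the usual defensive pattern, reusing the Car class from the question:
std::unique_ptr<Car> p1(new Car());
std::unique_ptr<Car> p2 = std::move(p1);
if (p1) {                  // false here: a moved-from unique_ptr holds nullptr
    p1->foo();
} else {
    std::cout << "p1 no longer owns an object\n";
}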
|
74,253,748
| 74,255,031
|
How to get a userAccountControl Attribute in active directory
|
I am working with Active Directory and I need to get the value of a checkbox in userAccountControl,
and I want to know whether the checkbox is checked or not.
I tried to get its value by using the code
VARIANT var;
VariantInit(&var);
hr = pUsr->Get(CComBSTR("userAccountControl"), &var);
if (SUCCEEDED(hr)) {
std::cout << V_I4(&var) << std::endl;
}
and I got the output 512. The problem is that whether the checkbox is checked or unchecked, it has the same value 512. It changes for other checkboxes but shows the same value for this option. I need a way to find out whether this checkbox is set or not.
|
You want to look at the pwdLastSet attribute. The documentation for that says:
If this value is set to 0 and the User-Account-Control attribute does not contain the UF_DONT_EXPIRE_PASSWD flag, then the user must set the password at the next logon.
So two things must be true to force a password change on next logon:
The pwdLastSet attribute is 0.
The account is not set to never expire the password. In C++ I think that would look something like this:
!(V_I4(&var) & DONT_EXPIRE_PASSWORD)
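As a rough sketch only — it assumes you have already fetched both attributes and converted pwdLastSet to a 64-bit integer, and it uses the ADS_UF_DONT_EXPIRE_PASSWD flag from iads.h rather than the shorthand DONT_EXPIRE_PASSWORD name above:
#include <iads.h>

// True if the user will be forced to change the password at next logon.
bool mustChangePasswordAtNextLogon(long userAccountControl, long long pwdLastSet)
{
    bool passwordNeverExpires = (userAccountControl & ADS_UF_DONT_EXPIRE_PASSWD) != 0;
    return pwdLastSet == 0 && !passwordNeverExpires;
}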
|
74,253,913
| 74,254,068
|
How can I tell rust two traits have the same type?
|
I'm trying to implement the C++ adjacent_difference algorithm for a Rust iterator. std::adjacent_difference by default subtracts the previous value from the current value in a container; for example, if we have [5, 7, 16], then after adjacent_difference we get [5, 7-5, 16-7].
I try to do it in rust as an iterator extension, but found out that Iterator::Item - Iterator::Item is std::ops::Sub::Output, to make the Iterator::next return type happy, I have to do some weird things, like Some(iter_item_value - <I as Iterator>::Item::default()) to get the correct type godbolt
struct AdjacentDifference<I>
where I: Iterator
{
prev: Option<I::Item>,
it: I,
}
impl<I> Iterator for AdjacentDifference<I>
where I: Iterator, <I as Iterator>::Item: std::ops::Sub + Default + Copy,
{
type Item = <<I as Iterator>::Item as std::ops::Sub>::Output;
fn next(&mut self) -> Option<Self::Item> {
match (self.it.next(), self.prev) {
(None, _) => None,
(Some(i), Some(p)) => {
let r = i - p;
self.prev = Some(i);
Some(r)
}
(Some(i), None) => {
self.prev = Some(i);
Some(i - <I as Iterator>::Item::default()) // <<- really?
}
}
}
}
trait AdjacentDifferenceExt: Iterator where Self: Sized,
{
fn adjacent_difference(self) -> AdjacentDifference<Self>
where <Self as Iterator>::Item: Copy,
{
AdjacentDifference {
prev: None,
it: self,
}
}
}
impl<I> AdjacentDifferenceExt for I where I: Iterator {}
It works, but I don't like this code very much, did I do this totally wrong? are there any better ways to achieve this?
|
There are multiple ways you can tackle this.
For example, one option is just to specify the subtraction has the same type as the element itself:
impl<I> Iterator for AdjacentDifference<I>
where
I: Iterator,
I::Item: std::ops::Sub<Output = I::Item> + Copy,
{
type Item = <<I as Iterator>::Item as std::ops::Sub>::Output;
fn next(&mut self) -> Option<Self::Item> {
match (self.it.next(), self.prev) {
(None, _) => None,
(Some(i), Some(p)) => {
let r = i - p;
self.prev = Some(i);
Some(r)
}
(Some(i), None) => {
self.prev = Some(i);
Some(i)
}
}
}
}
This restricts the type you can use (for example, you won't be able to use Instant - Instant = Duration), but for most types this will be OK.
Another option is to specify that the iterator element type can be converted into the subtraction type via From:
impl<I> Iterator for AdjacentDifference<I>
where
I: Iterator,
I::Item: std::ops::Sub + Copy,
<I::Item as std::ops::Sub>::Output: From<I::Item>,
{
type Item = <<I as Iterator>::Item as std::ops::Sub>::Output;
fn next(&mut self) -> Option<Self::Item> {
match (self.it.next(), self.prev) {
(None, _) => None,
(Some(i), Some(p)) => {
let r = i - p;
self.prev = Some(i);
Some(r)
}
(Some(i), None) => {
self.prev = Some(i);
Some(i.into())
}
}
}
}
This will implicitly work for the cases where the subtraction type is the element type, thanks to the blanket impl<T> From<T> for T.
|
74,254,176
| 74,255,044
|
sentinel node in cpp list does not work as expected
|
I am writing C++11 and want to create a list of lists based on the data received. The data structure is std::list<char, std::list<char, int>>. The outer list stores a list of inner lists, and when several successive inner lists have the same label, they should be grouped together; otherwise a new entry is created. Analogously, the inner list counts the number of successive data entries that have the same label and stores the total count, otherwise creates a new entry.
I use the following trick to avoid checking whether the container is empty every time. When a list is created, a sentinel node is inserted into the freshly created container. The sentinel node gets a label that will never appear in the received data, so every time new data arrives I compare the data label with the label of the last entry, instead of adding extra code to check whether this is the first entry in the current list.
But it does not work as expected. It seems that the sentinel node is not inserted at all.
Here is the code.
#include <list>
#include <tuple>
#include <iostream>
using profileLoc = std::pair<char, size_t>;
using profileGrp = std::pair<char, std::list<profileLoc>>;
std::list<profileGrp> totalInfo;
void dump(bool isFinal=false){
static int dumpCount = 0;
if(isFinal) std::cout<<"Final ";
std::cout<<"dump "<< ++dumpCount<<"\n";
for(auto grp:totalInfo){
if(grp.first==-1) continue;
std::cout<<"grp label: "<<grp.first<<"\n";
for(auto range: grp.second){
if(std::get<0>(range)==0) continue;
std::cout<<"\ttype label: "<<std::get<0>(range)<<" count: "<<std::get<1>(range)<<"\n";
}
}
}
int main(){
profileGrp dummyGrp{-1,{{0,0}}};
totalInfo.push_back(dummyGrp);
using dataTy = std::tuple<int, char>;
std::list<dataTy> sampleData{
{'X', 'a'},
{'X', 'a'},
{'X', 'a'},
{'X', 'b'},
{'X', 'b'},
{'X', 'b'},
//{'Y', 'c'},
//{'Y', 'c'},
//{'Y', 'b'},
//{'Y', 'b'},
//{'Y', 'b'},
};
for(auto data:sampleData){
char grpNo;
char typeNo;
std::tie(grpNo, typeNo) = data;
std::cout<<"receiving data "<<grpNo<<" "<<typeNo<<"\n";
profileGrp& lastGrp = totalInfo.back();
if(lastGrp.first != grpNo){
std::list<profileLoc> dummyList{{0, 0}};
totalInfo.emplace_back(grpNo, dummyList);
lastGrp = totalInfo.back();
}
std::list<profileLoc>& locList = lastGrp.second;
if(std::get<0>(locList.back())!=typeNo){
locList.emplace_back(typeNo, 1);
} else {
size_t lastCount = std::get<1>(locList.back());
locList.pop_back();
locList.emplace_back(typeNo, lastCount+1);
}
//dump();
}
dump(true);
return 0;
}
The result I get is:
receiving data X a
receiving data X a
receiving data X a
receiving data X b
receiving data X b
receiving data X b
Final dump 1
grp label: X
type label: a count: 1
grp label: X
type label: a count: 2
type label: b count: 3
If you know how to fix the code please tell me. Thank you.
The result I expect is:
receiving data X a
receiving data X a
receiving data X a
receiving data X b
receiving data X b
receiving data X b
Final dump 1
grp label: X
type label: a count: 3
type label: b count: 3
|
Make it work first and then try to make it work better.
Here is a revision with sentinel nodes removed. Try to add it back and compare with your code.
#include <list>
#include <tuple>
#include <iostream>
using profileLoc = std::pair<char, size_t>;
using profileGrp = std::pair<char, std::list<profileLoc>>;
class MyInfo{
public:
MyInfo& AddEntry(char groupNo, char typeNo)
{
bool g_f = false;
for(auto & g : totalInfo)
{
if(g.first == groupNo){
bool l_f = false;
for(auto& loc: g.second)
{
if(loc.first == typeNo){
++loc.second;
l_f = true;
}
}
if( !l_f )
g.second.emplace_back(typeNo, 1);
g_f = true;
}
}
if( !g_f )
totalInfo.emplace_back(groupNo,std::list<profileLoc>{{typeNo, 1}} );
return *this;
}
#ifdef _DEBUG
void Dump(bool isFinal = false)
{
static int dumpCount = 0;
if(isFinal) std::cout<<"Final ";
std::cout<<"dump "<< ++dumpCount <<"\n";
for(const auto& grp:totalInfo){ // use reference if you don't want to make a copy
// here you don't need. the const modifier is a good practice
// to communicate to the compiler you have no
// intention to modify it in the following for-loop scope
//if(grp.first==-1) continue;
std::cout<<"grp label: "<<grp.first<<"\n";
for(const auto& range: grp.second){
if(std::get<0>(range)==0) continue;
std::cout<<"\ttype label: "<<std::get<0>(range)<<" count: "<<std::get<1>(range)<<"\n";
}
}
}
#else
void Dump(bool){}
#endif
private:
std::list<profileGrp> totalInfo;
};
int main(){
using dataTy = std::tuple<int, char>;
std::list<dataTy> sampleData{
{'X', 'a'},
{'X', 'a'},
{'X', 'a'},
{'X', 'b'},
{'X', 'b'},
{'X', 'b'},
{'Y', 'c'},
{'Y', 'c'},
{'Y', 'b'},
{'Y', 'b'},
{'Y', 'b'},
};
MyInfo info;
for(auto& data:sampleData){
auto& [grp,type] = data;
info.AddEntry(grp, type);
}
info.Dump(true);
return 0;
}
The above implementation doesn't rely on the data coming in sorted. If that's a given, AddEntry() can be revised to take advantage of it:
MyInfo& AddEntry(char groupNo, char typeNo)
{
if( totalInfo.empty() || totalInfo.back().first != groupNo){
totalInfo.emplace_back(groupNo, std::list<profileLoc>{{typeNo, 1}});
return *this;
}
auto& _profLoc = totalInfo.back().second; // the std::list<profileLoc> we
// are going to do more processing on
if(_profLoc.empty() || _profLoc.back().first != typeNo )
_profLoc.emplace_back(typeNo, 1);
else
++_profLoc.back().second;
return *this;
}
Adding sentinels to skip the is-the-list-empty checks should be a piece of cake from here.
|
74,254,508
| 74,254,844
|
C++ Why can't I get into an if condition if the statement is true?
|
I made a for loop in the constructor that goes through XML elements of bricks and creates the brick depending on what the Id is.
I even checked typeid in case mId wasn't a char, but it's const char*. The std::cout << "Made it in"; never triggers. Here is the code:
Brick::Brick(int rowSpacing, int columnSpacing, const char* Id, SDL_Renderer* renderer)
{
mDoc = new XMLDocument();
mDoc->LoadFile("Breakout.xml");
// Go through every "BrickType" element in the XML file
dataElement = mDoc->FirstChildElement("Level")->FirstChildElement("BrickTypes");
for (XMLElement* brickElement = dataElement->FirstChildElement("BrickType");
brickElement != NULL; brickElement = brickElement->NextSiblingElement())
{
std::cout << typeid(brickElement->FirstChildElement("Id")->GetText()).name();
std::cout << typeid(Id).name();
if (brickElement->FirstChildElement("Id")->GetText() == Id) {
std::cout << "Made it in";
mId = brickElement->FirstChildElement("Id")->GetText();
std::stringstream hp(brickElement->FirstChildElement("HitPoints")->GetText());
hp >> mHitPoints;
std::stringstream bs(brickElement->FirstChildElement("BreakScore")->GetText());
bs >> mBreakScore;
mTexture = IMG_LoadTexture(renderer, brickElement->FirstChildElement("Texture")->GetText());
}
}
mCurrentHp = mHitPoints;
mIsHit = false;
rect = { rowSpacing, columnSpacing, mBrickWidth, mBrickHeight };
}
Instead of the Id in if (brickElement->FirstChildElement("Id")->GetText() == Id) i tried hard coding "S" as one of the brick Ids is "S", like this if (brickElement->FirstChildElement("Id")->GetText() == "S"), but it still doesn't want to get into an if condition, I don't understand why.
|
C strings are character arrays and not first-class datatypes in C++ (or in C, as it happens). As such you cannot use comparison operators on them to compare their contents: semantically you are comparing the values of two pointers, not the content of two strings.
To that end, you either need a C-String comparison:
if( strcmp(brickElement->FirstChildElement("Id")->GetText(),
Id) == 0 ) ...
or use the C++ std::string comparison:
if( std::string(brickElement->FirstChildElement("Id")->GetText()) ==
std::string( Id ) ) ...
The std::string class overloads the == operator (and others) so that it compares the content; in fact only one of the operands needs to be a std::string, since comparison with a const char* is also overloaded. Constructing a std::string just to perform a comparison is somewhat heavyweight, but in other circumstances you might do well to use std::string in preference to C strings in any case.
|
74,255,969
| 74,256,184
|
How to store cout in string and then output the string to console?
|
I am trying to store the output of my program in a file, and even though I know there are various much simpler methods, I want to solve the problem using strings since I would like to know the logic behind it.
So far, I understand the implementation:
std:: stringstream s;
s << "string";
and I know at some point I will have the following code
cout << s.str()
but how do I store my program output in the string stream without providing the string itself? Put another way, how do I redirect the cout statements within my program to a string?
|
If your goal is to redirect std::cout to a std::string, you can use the cout.rdbuf() method to give std::cout a different buffer to write to, such as the buffer of a std::ostringstream (or a std::ofstream, etc).
The above linked documentation provides the following example of exactly this:
#include <iostream>
#include <sstream>
int main() {
std::ostringstream local;
auto cout_buff = std::cout.rdbuf(); // save pointer to std::cout buffer
std::cout.rdbuf(local.rdbuf()); // substitute internal std::cout buffer with
// buffer of 'local' object
// now std::cout work with 'local' buffer
// you don't see this message
std::cout << "some message";
// go back to old buffer
std::cout.rdbuf(cout_buff);
// you will see this message
std::cout << "back to default buffer\n";
// print 'local' content
std::cout << "local content: " << local.str() << "\n";
}
|
74,256,005
| 74,256,574
|
Padded Structs (C++)
|
I have an 18 byte struct in C++. I want to read 18 bytes of data from a file straight into this struct. However, my C++ compiler pads the struct to be 20 bytes (4 byte aligned). This is relatively easy to get around for just my compiler alone but I would prefer to use a method that is more reliable cross-platform/cross-compiler.
This is the struct:
struct Test {
uint8_t a;
uint8_t b;
uint8_t c;
uint16_t d;
uint16_t e;
uint8_t f;
uint16_t g;
uint16_t h;
uint16_t i;
uint16_t j;
uint8_t k;
uint8_t l;
};
I could add bytes to the front of the struct to guarantee it to be 32 bytes which would be a valid alignment on most systems, however I don't know if that would actually work with how structs need their elements to be naturally aligned.
Any help on this would be great but I could always end up copying the bytes manually into the attributes .
|
You have several options, and as usual, you should choose whatever best fits your needs:
as stated before, don't read/write directly from/to memory, instead write each field separately (kind of how Java people would).
This is, I think, the most portable option, but WAY slower than the later methods.
reorder the struct to match normal alignment (good practice anyway)
in your example:
struct Test {
uint8_t a;
uint8_t b;
uint8_t c;
uint8_t f; // moved
uint16_t d;
uint16_t e;
// uint8_t f;
uint16_t g;
uint16_t h;
uint16_t i;
uint16_t j;
uint8_t k;
uint8_t l;
uint16_t spare;
};
Note: it still has 2 bytes of padding (the explicit spare field), but not in the middle :)
use #pragma pack(push, 1) on the struct to tell the compiler to NOT align the bytes
Note: you may need to place multiple #pragma to support different compilers
#pragma pack(push, 1)
struct Test {
uint8_t a;
uint8_t b;
uint8_t c;
uint16_t d;
uint16_t e;
uint8_t f;
uint16_t g;
uint16_t h;
uint16_t i;
uint16_t j;
uint8_t k;
uint8_t l;
uint16_t spare;
};
#pragma pack(pop)
I'd like to add that proper alignment helps the CPU process data faster; therefore, you don't want to force pack = 1 on all structs... only those intended to be transmitted or received via a communication channel.
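For completeness, a hedged sketch of what the read side might look like once the struct layout matches the on-disk record byte for byte (field order, sizes, and endianness are assumed to match the file format; no conversion is shown):
#include <fstream>

// Read one on-disk record straight into the packed/reordered struct.
bool readTest(std::ifstream& in, Test& out) {
    in.read(reinterpret_cast<char*>(&out), sizeof(Test));  // sizeof(Test) == record size
    return static_cast<bool>(in);
}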
|
74,256,770
| 74,256,828
|
Is ostream& operator<< better practice in a class than using std::cout?
|
In a Linked List class, I created a display method to print out the Linked List as it is currently formulated like so:
void LinkedList::display() {
Node* curr = m_head;
while (curr) {
std::cout << curr->n_data << " -> ";
curr = curr->n_next;
}
std::cout << std::endl;
}
One of the TA's who graded the assignment left a comment:
Never do cout in a class file. Use ostream& operator<< instead.
Classes shouldn't care about where you are outputting to (file/console/etc).
We hadn't yet learned about operator overloading at the time of this assignment, but I'm still not really sure.
How would I actually implement and use the ostream& operator>> to print to the console? Why is this best practice?
|
The simplest solution in this case would be to add a std::ostream parameter to your display() method, eg:
void LinkedList::display(std::ostream &out) const {
Node* curr = m_head;
while (curr) {
out << curr->n_data << " -> ";
curr = curr->n_next;
}
out << std::endl;
}
LinkedList list;
...
list.display(std::cout);
Then, if down the line, you decide to overload operator<<, it can simply call display(), eg:
std::ostream& operator<<(std::ostream &out, const LinkedList &list) {
list.display(out);
return out;
}
LinkedList list;
...
std::cout << list;
|