Hi guys. As you know, for the past few weeks I have been working with Raspberry Pi. I have created a dedicated series on this and posted a few videos on how to get started. Now the initial part of the setup is complete, and it is time to start writing our Python scripts for algo trading. This video is operating system agnostic; no Raspberry Pi is needed.
You know, 8 or 9 years ago, when I transitioned from discretionary to quantitative trading, I gradually realized that data is far more important than analytical skills.
We can check the response code in PyCharm's variable window here. A 200 response code means the request was successful; a 4xx code indicates a client error, and a 5xx code indicates the request failed on the server. Time to post-process that fetched HTML data. Expanding the fetched data reveals a text field containing an HTML-formatted string. Beautiful Soup then parses and cleans this HTML in two lines of code so that pandas can process it with df = pd.read_html(html_data). Pandas converts this HTML data into a neat data frame. Let's look at the Special Variables window again. Here we can see that our variable df actually holds 9 different data frames. If we expand it, we can see the various data frames. Let's open the first one by clicking on View as DataFrame.
This is the same data frame that we can see in table format on the website. And if we click on the second data frame, it corresponds to the second table on the SGX Nifty webpage.
If you want to fetch other indexes like the S&P 500 or the Dow Jones, you can click on data frame number 6 and get the relevant data. You can use the same code for various other symbols. Since we need the SGX data, we will use the first element in the df list.
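To make this post-processing step concrete, here is a minimal sketch. It assumes the fetched response text already sits in a variable called html_data; the rest follows the two parsing lines and the pd.read_html call described above.

```python
import pandas as pd
from bs4 import BeautifulSoup

# The two parsing lines: Beautiful Soup parses and cleans the raw HTML string.
soup = BeautifulSoup(html_data, "html.parser")
cleaned_html = str(soup)

# pandas extracts every <table> on the page into a list of DataFrames.
df = pd.read_html(cleaned_html)

sgx = df[0]  # first table: the SGX Nifty data in this example
# Per the video, df[6] holds other indexes such as the S&P 500 or Dow Jones.
print(sgx.head())
```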
Now, if we open this SGX data frame, a datetime index is missing. So we will create a column called date, set it as the index, and then export the frame as a CSV file. The remaining part of the code is optional.
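Continuing the sketch above, that indexing and export step might look like the following; the column name, the timestamp value, and the file path are illustrative assumptions rather than the exact ones from the video.

```python
from datetime import datetime

# The scraped table has no datetime index, so stamp the rows ourselves.
sgx["date"] = datetime.now()
sgx = sgx.set_index("date")

# Export to CSV; appending with mode="a" would accumulate a history across runs.
sgx.to_csv("sgx_nifty.csv")
```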
Here I converted the SGX data frame into a human-readable text format, then used the gTTS and pygame libraries to read that text aloud. Most examples for gTTS read and write files on disk, but here we wrote the files to virtual memory to speed up the process.
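As a rough sketch of that in-memory variant, gTTS can write into an io.BytesIO buffer that pygame's mixer then plays directly; the spoken text below is a made-up placeholder, and pygame 2 is assumed since its mixer accepts file-like objects.

```python
import io
from gtts import gTTS
import pygame

def speak(text: str) -> None:
    # Render the speech into an in-memory buffer instead of a file on disk.
    buffer = io.BytesIO()
    gTTS(text=text, lang="en").write_to_fp(buffer)
    buffer.seek(0)

    # pygame's mixer can load directly from a file-like object.
    pygame.mixer.init()
    pygame.mixer.music.load(buffer, "mp3")
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():
        pygame.time.Clock().tick(10)

speak("SGX Nifty data downloaded successfully.")  # placeholder summary text
```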
And if you have noticed, I am using a Python environment installed on a Raspberry Pi via SSH in PyCharm. If you want to know more about how to do that, I have made a video on it.
Check that out; I will put the link in the description. If I execute these lines, you will hear the TTS engine.
I have connected the speaker to the Raspberry Pi's auxiliary output. The sound quality might seem different because my microphone is far away. Listen.
So we will conclude our video with this. Our job is done. We will put this script in a Linux crontab, and if you are on Windows, you can put the script in the Windows Task Scheduler. In the next video I will show you how I modified the investing.py library using the same technique explained in this video. That library is down because of a Cloudflare firewall. We will use our adjusted script to grab the economic event data. After that, we will feed the human-readable version into Google's text-to-speech engine like we did here. So take care and happy coding. See you in the next one.
I mean, look at it this way: there is an upper limit to how much analysis you can perform on a single data source. But imagine finding an underutilized data source and then applying similarly advanced analytical methods while seeking that crucial edge. Furthermore, data has become crucial for machine learning and AI. Therefore, innovative ways to find data where others aren't looking are increasingly important. We can't rely on traditional market data vendors for the same old generic data everyone else uses. This is where web scraping becomes valuable. So let's get into a practical example. Our first web scraping script will fetch SGX Nifty data. Open the web page you want to scrape in your browser; this is the website we are going to scrape the data from.
The site lists other symbols too, so you can easily modify a small part of the code to use them instead. I am using SGX Nifty data as an example, and I am using Edge, a Chromium-based browser. The basic structure of Python code for web scraping is straightforward and largely consistent. The real trick lies in obtaining the necessary URLs and headers so that your script mimics a web browser.
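In outline, that consistent structure looks something like this. The URL and the User-Agent string are placeholders, not the exact values used in the video; the real request URL is the one you copy from the browser's network tab, as explained later on.

```python
import requests

# Placeholder: paste the request URL copied from the browser's network tab here.
url = "https://example.com/api/market-data"

# A browser-like User-Agent helps the request pass as ordinary browser traffic.
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}

response = requests.get(url, headers=headers, timeout=10)
print(response.status_code)  # 200 = success, 4xx = client error, 5xx = server error
html_data = response.text    # raw HTML, ready for post-processing
```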
Please consider the following disclaimer before proceeding. One crucial thing to remember is the legality of web scraping. It's generally acceptable to scrape openly available data, but don't redistribute it commercially or collect personal information like email IDs or usernames. I mean, there is still a grey area, but avoid overloading the target servers; this can lead to your IP being banned. Keep these points in mind.
If you want to remember only one thing from this video, then remember this: simply right click on the webpage you want to scrape the data from and select Inspect Element from the popup menu. That's all. A window will pop up.
So the method is virtually identical to Google Chrome. Alternatively, use the keyboard shortcut Ctrl+Shift+I; this applies to all Chromium-based browsers.
The process is very similar in other browsers, with only minor visual differences in the Inspect Element window. At first glance this might seem overwhelming, but don't worry; just ignore the clutter and focus on the Network tab. Its icon looks like a Wi-Fi signal.
If the window feels crowded, click the three dots and then the icon that says "Undock into separate window" to open it in its own space. Refresh your browser, and the Network tab will populate with data. You can see this by clicking through the tabs.
Finding the correct relative URL requires some detective work; there is no single hard rule. Honestly, it's kind of an art. But there are a few tricks, and using them you can streamline your investigation.
So let's get into the tricks. Focus on the Type column. Your target is to find a text document that can be in any format, such as a table.
The page for this website auto-refreshes after a few minutes. If that isn't the case for your target website, don't worry; just look through the rows in the Type column. If it says "document", you have likely found it. Click on it, then click on the Response tab.
This shows the data fetched by the website through the URL it uses internally. Examining the response details will confirm whether it is the correct data; in this case, it looks like stock market data.
To find the magic URL itself, click the Headers tab. The Headers tab is crucial for Python requests: it provides the data points that make the Python request look as if it's coming from a browser. The request URL is what you need here; right click and copy it. By the way, for this video we don't need the other header information, but at some point in your web scraping journey you might need it.
Finding the relative URL was easy in this instance, but that's not always the case. If your page doesn't auto-refresh, or if you can't locate the relevant data, especially on a large site, it can get difficult.
These sites often contain a lot of data in various formats, such as JSON, requiring significant scrolling to find the right document type.
Instead of clicking through numerous tabs, the quickest and easiest method is usually to check the Fetch/XHR tab first. You will likely find the relative URL there. If not, try the Doc tab; the URL is almost always in one of these two. You can safely ignore the rest, as their names clearly indicate they wouldn't contain the data you need. The Media tab might be helpful if you want to fetch video or audio from a site; that's less relevant to quantitative trading, but maybe relevant to quant traders, if you know what I mean. If the Fetch/XHR and Doc tabs don't yield results, the WS section, short for WebSocket, is the next place to look.
This involves a slightly more complex process, as the data is streamed in binary or string format depending on the website. This section is useful for scraping tick data or interacting with the browser via WebSockets. Actually, many discount brokers' APIs use this method. I might make a future video on fetching from or interacting with WebSockets in Python, but that's a steeper learning curve for beginners. Master this one first, and then we will get into WebSockets later. OK, our data exploration, often the trickiest and arguably the most important part, is complete. The more you practice, the more familiar you become.
In my opinion, creating your own web scraper is better than relying on Python libraries for fetching market data. I learned this the hard way: code built with such libraries often breaks after a few months or years when websites update their user interface or underlying structure, which happens far more often than you'd expect.
The rise of large language models has made data protection incredibly difficult.
Companies are realizing their data's value and are working to protect it from those who would use it commercially, like OpenAI. This is understandable, since that kind of use is unfair, as I mentioned earlier in the disclaimer.
So at some point we might hit a wall, and the website blocks our requests, usually via Cloudflare. Luckily, there are a few Python libraries that act as drop-in replacements for the traditional requests library to get past that firewall.
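The video doesn't name a specific library at this point; as one hedged illustration, the third-party cloudscraper package exposes a requests-style session that attempts to pass Cloudflare's anti-bot checks.

```python
import cloudscraper

# create_scraper() returns a session with the familiar requests-style API.
scraper = cloudscraper.create_scraper()

# Placeholder URL: a page that blocks plain requests behind Cloudflare.
response = scraper.get("https://example.com/protected-data")
print(response.status_code, len(response.text))
```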