Compare commits


28 Commits
0.7.0 ... 0.8.0

Author SHA1 Message Date
f8eb5ab416 Break after successful response 2018-10-01 20:02:14 -05:00
ae2850215f Fix method for detecting valid info response 2018-10-01 19:55:47 -05:00
d418f50576 Make geo-bypass more robust 2018-10-01 19:01:44 -05:00
8c04768ef8 Add support for geo-bypass in '/videoplayback' 2018-09-30 20:26:28 -05:00
a718d5543d Add 'lang' and 'tlang' to '/api/v1/captions' 2018-09-30 10:13:07 -05:00
20130db556 Add mixes 2018-09-29 10:59:11 -05:00
66f3ab0663 Update README 2018-09-29 10:11:21 -05:00
1de7c0caf9 Merge pull request #186 from flourgaz/feature/docker-compose (Add basic docker-compose cluster) 2018-09-29 10:04:31 -05:00
7d35b6e44f Add rel="noopener" to target="_blank" links 2018-09-29 09:56:37 -05:00
71a99542fe basic docker-compose cluster 2018-09-29 13:30:56 +02:00
8530c1f4ec Fix typo 2018-09-28 19:44:16 -05:00
29a6291957 Show info instead of empty playlist when possible 2018-09-28 09:54:45 -05:00
25ba5bda62 Fix encoding of playlist index 2018-09-28 09:54:01 -05:00
477c84deb1 Don't deliver new notifications for YouTube Red videos 2018-09-28 09:23:28 -05:00
c2f7d3d41c Add handling for specific genre channels 2018-09-27 17:11:19 -05:00
b0b5e3e982 Escape search queries 2018-09-27 17:02:59 -05:00
4fb275ec6e Get more video information when possible 2018-09-26 19:47:06 -05:00
f99b2cdf01 Add support for proxying comments 2018-09-26 18:44:37 -05:00
5d7bd9af0f Add host language for comments 2018-09-26 10:33:08 -05:00
aa819a189e Use alternate source for proxies 2018-09-25 21:07:18 -05:00
2e65997447 Fix geo-bypass threads 2018-09-25 18:16:07 -05:00
3e3de1890a Overhaul geo-bypass 2018-09-25 17:56:59 -05:00
5b5d69a33b Add host language to YouTube requests 2018-09-25 17:55:32 -05:00
1289065151 Add host language to fetch_video 2018-09-25 17:42:17 -05:00
21a8df42dd Add fix for short playlist descriptions 2018-09-25 10:28:57 -05:00
74b285d0f7 Add author thumbnails to playlist endpoint 2018-09-25 10:28:40 -05:00
c2e72439f5 Don't add anchor for empty genre URL 2018-09-25 10:10:25 -05:00
87498ae777 Update CHANGELOG 2018-09-25 09:55:14 -05:00
22 changed files with 675 additions and 109 deletions

@@ -1,3 +1,24 @@
# 0.7.0 (2018-09-25)
## Week 7: 1080p and Search Types
Hello again everyone! I've got quite a few announcements this week:
Experimental 1080p support has been added with [`b3ca392`](https://github.com/omarroth/invidious/commit/b3ca3922a9073b4abb0d7fde58a3e6098668f53e), and can be enabled by going to preferences and changing `preferred video quality` to `dash`. You can find more details [here](https://github.com/omarroth/invidious/issues/34#issuecomment-424171888). Quality and speed controls have not yet been integrated into the player, but I'd still appreciate feedback, mainly on any issues with buffering or DASH playback. I hope to integrate 1080p support into the player and push support site-wide in the coming weeks.
You can now filter content types in search with the `type:TYPE` filter. Supported content types are `playlist`, `channel`, and `video`. More info is available [here](https://github.com/omarroth/invidious/issues/126#issuecomment-423823148). I think this is quite an improvement in usability and I hope others find the same.
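For example, a search like `chess type:playlist` should return only playlists.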
A [CHANGELOG](https://github.com/omarroth/invidious/blob/master/CHANGELOG.md) has been added to the repository, so folks will now receive a copy of all these updates when cloning. I think this is an improvement in hosting the project, as it is no longer tied to the `/releases` tab on Github or the posts on Patreon.
Recently, users have been reporting 504s when attempting to access their subscriptions, which is tracked in [#173](https://github.com/omarroth/invidious/issues/173). This is most likely caused by an uptick in usage, which I am absolutely grateful for, but it has unfortunately also increased hosting costs, so I will be bumping my goal on Patreon from $60 to $80. I would appreciate any feedback on how subscriptions could be improved.
Other minor improvements include:
- Additional regions added to bypass geo-block with [`9a78523`](https://github.com/omarroth/invidious/commit/9a7852341d9d67b6bddd8a9836c1b71c124c3614)
- Fix for playlists containing fewer than 100 videos (previously shown as empty) with [`35ac887`](https://github.com/omarroth/invidious/commit/35ac88713320a970e3a87a26249c2a18a709f020)
- Fix for the `published` date of Reddit comments (previously showing negative seconds) with [`6e09202`](https://github.com/omarroth/invidious/commit/6e092026d29eccc3e3adf02be138fddec2354027)
Thank you everyone for your support!
# 0.6.0 (2018-09-18)
## Week 6: Filters and Thumbnails

@@ -28,6 +28,29 @@ BCH: qq4ptclkzej5eza6a50et5ggc58hxsq5aylqut2npk
## Installation
### Docker:
#### Build and start cluster:
```bash
$ docker-compose up
```
And visit `localhost:3000` in your browser.
#### Rebuild cluster:
```bash
$ docker-compose build
```
#### Delete data and rebuild:
```bash
$ docker volume rm invidious_postgresdata
$ docker-compose build
```
### Installing [Crystal](https://github.com/crystal-lang/crystal):
#### On Arch:
@@ -74,8 +97,21 @@ $ sudo pacman -S imagemagick librsvg
## Usage:
```bash
$ crystal build src/invidious.cr
$ ./invidious
$ crystal build src/invidious.cr --release
$ ./invidious -h
Usage: invidious [arguments]
-b HOST, --bind HOST Host to bind (defaults to 0.0.0.0)
-p PORT, --port PORT Port to listen for connections (defaults to 3000)
-s, --ssl Enables SSL
--ssl-key-file FILE SSL key file
--ssl-cert-file FILE SSL certificate file
-h, --help Shows this help
-t THREADS, --crawl-threads=THREADS
Number of threads for crawling (default: 1)
-c THREADS, --channel-threads=THREADS
Number of threads for refreshing channels (default: 1)
-v THREADS, --video-threads=THREADS
Number of threads for refreshing videos (default: 1)
```
Or for development:

docker-compose.yml Normal file

@@ -0,0 +1,21 @@
version: '3'
services:
postgres:
build:
context: .
dockerfile: docker/Dockerfile.postgres
restart: unless-stopped
volumes:
- postgresdata:/var/lib/postgresql/data
invidious:
build:
context: .
dockerfile: docker/Dockerfile
restart: unless-stopped
ports:
- "3000:3000"
depends_on:
- postgres
volumes:
postgresdata:
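# Note on the layout above: the invidious service reaches the database at the
# hostname "postgres" (docker/Dockerfile rewrites config.yml to match), and
# database state lives in the named volume, which is why a full reset requires
# `docker volume rm invidious_postgresdata` rather than a rebuild alone.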

docker/Dockerfile Normal file

@@ -0,0 +1,15 @@
FROM archlinux/base
RUN pacman -Sy --noconfirm shards crystal imagemagick librsvg \
which pkgconf gcc ttf-liberation
# base-devel contains many other basic packages that are normally assumed to already exist on a clean Arch system
ADD . /invidious
WORKDIR /invidious
RUN sed -i 's/host: localhost/host: postgres/' config/config.yml && \
shards && \
crystal build src/invidious.cr
CMD [ "/invidious/invidious" ]

docker/Dockerfile.postgres Normal file
@@ -0,0 +1,10 @@
FROM postgres:10
ENV POSTGRES_USER postgres
ADD ./setup.sh /setup.sh
ADD ./config/sql /config/sql
ADD ./docker/entrypoint.postgres.sh /entrypoint.sh
ENTRYPOINT [ "/entrypoint.sh" ]
CMD [ "postgres" ]

docker/entrypoint.postgres.sh Executable file

@@ -0,0 +1,19 @@
#!/usr/bin/env bash
CMD="$@"
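# First run: start postgres in the background, wait until it accepts
# connections, import the table schemas via /setup.sh, then leave a marker
# file so later starts skip straight to the stock entrypoint below.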
if [ ! -f /var/lib/postgresql/data/setupFinished ]; then
echo "### first run - setting up invidious database"
/usr/local/bin/docker-entrypoint.sh postgres &
sleep 10
until runuser -l postgres -c 'pg_isready' 2>/dev/null; do
>&2 echo "### Postgres is unavailable - waiting"
sleep 5
done
>&2 echo "### importing table schemas"
su postgres -c "/setup.sh" && touch /var/lib/postgresql/data/setupFinished
echo "### invidious database setup finished"
exit
fi
echo "running postgres /usr/local/bin/docker-entrypoint.sh $CMD"
exec /usr/local/bin/docker-entrypoint.sh $CMD

@@ -1,7 +1,8 @@
#!/bin/bash
createdb invidious
createuser kemal
#createuser kemal
psql -c "CREATE USER kemal WITH PASSWORD 'kemal';"
psql invidious < config/sql/channels.sql
psql invidious < config/sql/videos.sql
psql invidious < config/sql/channel_videos.sql

@@ -105,6 +105,15 @@ spawn do
end
end
proxies = {} of String => Array({ip: String, port: Int32})
spawn do
find_working_proxies(BYPASS_REGIONS) do |region, list|
if !list.empty?
proxies[region] = list
end
end
end
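# find_working_proxies (defined in the helpers below) loops forever,
# re-testing candidate proxies per region; only non-empty lists are stored,
# so a proxies[region]? lookup fails fast for regions with no working proxy.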
before_all do |env|
env.response.headers["X-XSS-Protection"] = "1; mode=block;"
env.response.headers["X-Content-Type-Options"] = "nosniff"
@@ -225,7 +234,7 @@ get "/watch" do |env|
end
begin
video = get_video(id, PG_DB)
video = get_video(id, PG_DB, proxies)
rescue ex
error_message = ex.message
STDOUT << id << " : " << ex.message << "\n"
@@ -325,7 +334,7 @@ get "/embed/:id" do |env|
params = process_video_params(env.params.query, nil)
begin
video = get_video(id, PG_DB)
video = get_video(id, PG_DB, proxies)
rescue ex
error_message = ex.message
next templated "error"
@@ -381,6 +390,7 @@ get "/embed/:id" do |env|
end
# Playlists
get "/playlist" do |env|
plid = env.params.query["list"]?
if !plid
@@ -392,15 +402,39 @@ get "/playlist" do |env|
begin
playlist = fetch_playlist(plid)
videos = fetch_playlist_videos(plid, page, playlist.video_count)
rescue ex
error_message = ex.message
next templated "error"
end
begin
videos = fetch_playlist_videos(plid, page, playlist.video_count)
rescue ex
videos = [] of PlaylistVideo
end
templated "playlist"
end
get "/mix" do |env|
rdid = env.params.query["list"]?
if !rdid
next env.redirect "/"
end
continuation = env.params.query["continuation"]?
continuation ||= rdid.lchop("RD")
begin
mix = fetch_mix(rdid, continuation)
rescue ex
error_message = ex.message
next templated "error"
end
templated "mix"
end
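# A mix ID is the seed video ID prefixed with "RD", so the first continuation
# can be recovered by stripping that prefix, e.g. (hypothetical ID):
#   "RDdQw4w9WgXcQ".lchop("RD") # => "dQw4w9WgXcQ"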
# Search
get "/results" do |env|
@@ -1718,11 +1752,13 @@ end
# API Endpoints
get "/api/v1/captions/:id" do |env|
env.response.content_type = "application/json"
id = env.params.url["id"]
client = make_client(YT_URL)
begin
video = get_video(id, PG_DB)
video = get_video(id, PG_DB, proxies)
rescue ex
halt env, status_code: 403
end
@@ -1730,9 +1766,10 @@ get "/api/v1/captions/:id" do |env|
captions = video.captions
label = env.params.query["label"]?
if !label
env.response.content_type = "application/json"
lang = env.params.query["lang"]?
tlang = env.params.query["tlang"]?
if !label && !lang
response = JSON.build do |json|
json.object do
json.field "captions" do
@@ -1752,22 +1789,27 @@ get "/api/v1/captions/:id" do |env|
next response
end
env.response.content_type = "text/vtt"
caption = captions.select { |caption| caption.name.simpleText == label }
env.response.content_type = "text/vtt"
if lang
caption = captions.select { |caption| caption.languageCode == lang }
end
if caption.empty?
halt env, status_code: 403
halt env, status_code: 404
else
caption = caption[0]
end
caption_xml = client.get(caption.baseUrl).body
caption_xml = client.get(caption.baseUrl + "&tlang=#{tlang}").body
caption_xml = XML.parse(caption_xml)
webvtt = <<-END_VTT
WEBVTT
Kind: captions
Language: #{caption.languageCode}
Language: #{tlang || caption.languageCode}
END_VTT
@@ -1806,6 +1848,8 @@ get "/api/v1/captions/:id" do |env|
end
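# A sketch of the parameters wired up above:
#   /api/v1/captions/ID                  -> JSON list of available tracks
#   /api/v1/captions/ID?label=English    -> WebVTT for the matching label
#   /api/v1/captions/ID?lang=en&tlang=de -> track matched by languageCode and
#     machine-translated via the &tlang= appended to the caption's baseUrl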
get "/api/v1/comments/:id" do |env|
env.response.content_type = "application/json"
id = env.params.url["id"]
source = env.params.query["source"]?
@@ -1816,26 +1860,63 @@ get "/api/v1/comments/:id" do |env|
if source == "youtube"
client = make_client(YT_URL)
html = client.get("/watch?v=#{id}&bpctr=#{Time.new.epoch + 2000}&gl=US&hl=en&disable_polymer=1")
headers = HTTP::Headers.new
html = client.get("/watch?v=#{id}&bpctr=#{Time.new.epoch + 2000}&disable_polymer=1")
headers["cookie"] = html.cookies.add_request_headers(headers)["cookie"]
headers["content-type"] = "application/x-www-form-urlencoded"
headers["x-client-data"] = "CIi2yQEIpbbJAQipncoBCNedygEIqKPKAQ=="
headers["x-spf-previous"] = "https://www.youtube.com/watch?v=#{id}"
headers["x-spf-referer"] = "https://www.youtube.com/watch?v=#{id}"
headers["x-youtube-client-name"] = "1"
headers["x-youtube-client-version"] = "2.20180719"
body = html.body
session_token = body.match(/'XSRF_TOKEN': "(?<session_token>[A-Za-z0-9\_\-\=]+)"/).not_nil!["session_token"]
itct = body.match(/itct=(?<itct>[^"]+)"/).not_nil!["itct"]
ctoken = body.match(/'COMMENTS_TOKEN': "(?<ctoken>[^"]+)"/)
if !ctoken
env.response.content_type = "application/json"
if body.match(/<meta itemprop="regionsAllowed" content="">/)
bypass_channel = Channel({String, HTTPClient, HTTP::Headers} | Nil).new
proxies.each do |region, list|
spawn do
list.each do |proxy|
begin
proxy_client = HTTPClient.new(YT_URL)
proxy_client.read_timeout = 10.seconds
proxy_client.connect_timeout = 10.seconds
proxy = list.sample(1)[0]
proxy = HTTPProxy.new(proxy_host: proxy[:ip], proxy_port: proxy[:port])
proxy_client.set_proxy(proxy)
proxy_html = proxy_client.get("/watch?v=#{id}&bpctr=#{Time.new.epoch + 2000}&gl=US&hl=en&disable_polymer=1")
proxy_headers = HTTP::Headers.new
proxy_headers["cookie"] = proxy_html.cookies.add_request_headers(headers)["cookie"]
proxy_html = proxy_html.body
if proxy_html.match(/<meta itemprop="regionsAllowed" content="">/)
bypass_channel.send(nil)
else
bypass_channel.send({proxy_html, proxy_client, proxy_headers})
end
break
rescue ex
end
end
end
end
proxies.size.times do
response = bypass_channel.receive
if response
session_token = response[0].match(/'XSRF_TOKEN': "(?<session_token>[A-Za-z0-9\_\-\=]+)"/).not_nil!["session_token"]
itct = response[0].match(/itct=(?<itct>[^"]+)"/).not_nil!["itct"]
ctoken = response[0].match(/'COMMENTS_TOKEN': "(?<ctoken>[^"]+)"/)
client = response[1]
headers = response[2]
break
end
end
end
if !ctoken
if format == "json"
next {"comments" => [] of String}.to_json
else
@@ -1843,7 +1924,6 @@ get "/api/v1/comments/:id" do |env|
end
end
ctoken = ctoken["ctoken"]
itct = body.match(/itct=(?<itct>[^"]+)"/).not_nil!["itct"]
if env.params.query["continuation"]? && !env.params.query["continuation"].empty?
continuation = env.params.query["continuation"]
@@ -1857,10 +1937,16 @@ get "/api/v1/comments/:id" do |env|
}
post_req = HTTP::Params.encode(post_req)
response = client.post("/comment_service_ajax?action_get_comments=1&pbj=1&ctoken=#{ctoken}&continuation=#{continuation}&itct=#{itct}", headers, post_req).body
response = JSON.parse(response)
headers["content-type"] = "application/x-www-form-urlencoded"
env.response.content_type = "application/json"
headers["x-client-data"] = "CIi2yQEIpbbJAQipncoBCNedygEIqKPKAQ=="
headers["x-spf-previous"] = "https://www.youtube.com/watch?v=#{id}&bpctr=#{Time.new.epoch + 2000}&gl=US&hl=en&disable_polymer=1"
headers["x-spf-referer"] = "https://www.youtube.com/watch?v=#{id}&bpctr=#{Time.new.epoch + 2000}&gl=US&hl=en&disable_polymer=1"
headers["x-youtube-client-name"] = "1"
headers["x-youtube-client-version"] = "2.20180719"
response = client.post("/comment_service_ajax?action_get_comments=1&pbj=1&ctoken=#{ctoken}&continuation=#{continuation}&itct=#{itct}&hl=en&gl=US", headers, post_req)
response = JSON.parse(response.body)
if !response["response"]["continuationContents"]?
halt env, status_code: 403
@@ -2016,8 +2102,6 @@ get "/api/v1/comments/:id" do |env|
halt env, status_code: 404
end
env.response.content_type = "application/json"
if format == "json"
reddit_thread = JSON.parse(reddit_thread.to_json).as_h
reddit_thread["comments"] = JSON.parse(comments.to_json)
@@ -2038,7 +2122,7 @@ get "/api/v1/insights/:id" do |env|
client = make_client(YT_URL)
headers = HTTP::Headers.new
html = client.get("/watch?v=#{id}&disable_polymer=1")
html = client.get("/watch?v=#{id}&gl=US&hl=en&disable_polymer=1")
headers["cookie"] = html.cookies.add_request_headers(headers)["cookie"]
headers["content-type"] = "application/x-www-form-urlencoded"
@@ -2113,12 +2197,13 @@ get "/api/v1/insights/:id" do |env|
end
get "/api/v1/videos/:id" do |env|
env.response.content_type = "application/json"
id = env.params.url["id"]
begin
video = get_video(id, PG_DB)
video = get_video(id, PG_DB, proxies)
rescue ex
env.response.content_type = "application/json"
error_message = {"error" => ex.message}.to_json
halt env, status_code: 500, response: error_message
end
@@ -2128,7 +2213,6 @@ get "/api/v1/videos/:id" do |env|
captions = video.captions
env.response.content_type = "application/json"
video_info = JSON.build do |json|
json.object do
json.field "title", video.title
@@ -2824,12 +2908,17 @@ get "/api/v1/playlists/:plid" do |env|
begin
playlist = fetch_playlist(plid)
videos = fetch_playlist_videos(plid, page, playlist.video_count)
rescue ex
error_message = {"error" => "Playlist is empty"}.to_json
halt env, status_code: 404, response: error_message
end
begin
videos = fetch_playlist_videos(plid, page, playlist.video_count)
rescue ex
videos = [] of PlaylistVideo
end
response = JSON.build do |json|
json.object do
json.field "title", playlist.title
@@ -2839,6 +2928,20 @@ get "/api/v1/playlists/:plid" do |env|
json.field "authorId", playlist.ucid
json.field "authorUrl", "/channel/#{playlist.ucid}"
json.field "authorThumbnails" do
json.array do
qualities = [32, 48, 76, 100, 176, 512]
qualities.each do |quality|
json.object do
json.field "url", playlist.author_thumbnail.gsub("=s100-", "=s#{quality}-")
json.field "width", quality
json.field "height", quality
end
end
end
end
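# The avatar URL embeds its pixel size in an "=s100-" segment, so one source
# URL can be rewritten to every listed quality; width equals height because
# channel avatars are square.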
json.field "description", playlist.description
json.field "descriptionHtml", playlist.description_html
json.field "videoCount", playlist.video_count
@@ -2873,6 +2976,55 @@ get "/api/v1/playlists/:plid" do |env|
response
end
get "/api/v1/mixes/:rdid" do |env|
env.response.content_type = "application/json"
rdid = env.params.url["rdid"]
continuation = env.params.query["continuation"]?
continuation ||= rdid.lchop("RD")
begin
mix = fetch_mix(rdid, continuation)
rescue ex
error_message = {"error" => ex.message}.to_json
halt env, status_code: 500, response: error_message
end
response = JSON.build do |json|
json.object do
json.field "title", mix.title
json.field "mixId", mix.id
json.field "videos" do
json.array do
mix.videos.each do |video|
json.object do
json.field "title", video.title
json.field "videoId", video.id
json.field "author", video.author
json.field "authorId", video.ucid
json.field "authorUrl", "/channel/#{video.ucid}"
json.field "videoThumbnails" do
json.array do
generate_thumbnails(json, video.id)
end
end
json.field "index", video.index
json.field "lengthSeconds", video.length_seconds
end
end
end
end
end
end
response
end
get "/api/manifest/dash/id/videoplayback" do |env|
env.response.headers["Access-Control-Allow-Origin"] = "*"
env.redirect "/videoplayback?#{env.params.query}"
@@ -2892,7 +3044,7 @@ get "/api/manifest/dash/id/:id" do |env|
client = make_client(YT_URL)
begin
video = get_video(id, PG_DB)
video = get_video(id, PG_DB, proxies)
rescue ex
halt env, status_code: 403
end
@@ -3078,8 +3230,40 @@ get "/videoplayback" do |env|
host = "https://r#{fvip}---#{mn}.googlevideo.com"
url = "/videoplayback?#{query_params.to_s}"
client = make_client(URI.parse(host))
response = client.head(url)
if query_params["region"]?
client = make_client(URI.parse(host))
response = HTTP::Client::Response.new(status_code: 403)
if !proxies[query_params["region"]]?
halt env, status_code: 403
end
proxies[query_params["region"]].each do |proxy|
begin
client = HTTPClient.new(URI.parse(host))
client.read_timeout = 10.seconds
client.connect_timeout = 10.seconds
proxy = HTTPProxy.new(proxy_host: proxy[:ip], proxy_port: proxy[:port])
client.set_proxy(proxy)
response = client.head(url)
if response.status_code == 200
# For whatever reason the proxy needs to be set again
client.set_proxy(proxy)
break
end
rescue ex
end
end
else
client = make_client(URI.parse(host))
response = client.head(url)
end
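# Stream URLs fetched through a geo-bypass proxy carry a &region= parameter
# (appended in videos.cr below), so the HEAD request is replayed through that
# region's pool and the first proxy answering 200 is kept for the transfer.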
if response.status_code != 200
halt env, status_code: 403
end
if response.headers["Location"]?
url = URI.parse(response.headers["Location"])

@@ -104,13 +104,17 @@ def fetch_channel(ucid, client, db, pull_all_videos = true)
videos.each do |video|
ids << video.id
db.exec("UPDATE users SET notifications = notifications || $1 \
# FIXME: Red videos don't provide published date, so the best we can do is ignore them
if Time.now - video.published > 1.minute
db.exec("UPDATE users SET notifications = notifications || $1 \
WHERE updated < $2 AND $3 = ANY(subscriptions) AND $1 <> ALL(notifications)", video.id, video.published, video.ucid)
video_array = video.to_a
args = arg_array(video_array)
db.exec("INSERT INTO channel_videos VALUES (#{args}) ON CONFLICT (id) DO UPDATE SET title = $2, \
video_array = video.to_a
args = arg_array(video_array)
db.exec("INSERT INTO channel_videos VALUES (#{args}) ON CONFLICT (id) DO UPDATE SET title = $2, \
published = $3, updated = $4, ucid = $5, author = $6", video_array)
end
end
if count < 30

@@ -244,11 +244,22 @@ def extract_items(nodeset, ucid = nil)
plid = HTTP::Params.parse(URI.parse(id).query.not_nil!)["list"]
anchor = node.xpath_node(%q(.//div[contains(@class, "yt-lockup-meta")]/a))
if !anchor
anchor = node.xpath_node(%q(.//ul[@class="yt-lockup-meta-info"]/li/a))
end
if anchor
video_count = anchor.content.match(/View full playlist \((?<count>\d+)/).try &.["count"].to_i?
video_count = node.xpath_node(%q(.//span[@class="formatted-video-count-label"]/b))
if video_count
video_count = video_count.content
if video_count == "50+"
author = "YouTube"
author_id = "UC-9-kyTW8ZkZNDHQJ6FgpwQ"
video_count = video_count.rchop("+")
end
video_count = video_count.to_i?
end
video_count ||= 0
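# Auto-generated playlists report their length as "50+"; rchop("+") makes the
# count parseable and the item is attributed to YouTube's auto-generated channel.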

@@ -89,6 +89,49 @@ class HTTPClient < HTTP::Client
end
def get_proxies(country_code = "US")
# return get_spys_proxies(country_code)
return get_nova_proxies(country_code)
end
def get_nova_proxies(country_code = "US")
country_code = country_code.downcase
client = HTTP::Client.new(URI.parse("https://www.proxynova.com"))
client.read_timeout = 10.seconds
client.connect_timeout = 10.seconds
headers = HTTP::Headers.new
headers["User-Agent"] = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36"
headers["Accept"] = "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8"
headers["Accept-Language"] = "Accept-Language: en-US,en;q=0.9"
headers["Host"] = "www.proxynova.com"
headers["Origin"] = "https://www.proxynova.com"
headers["Referer"] = "https://www.proxynova.com/proxy-server-list/country-#{country_code}/"
response = client.get("/proxy-server-list/country-#{country_code}/", headers)
document = XML.parse_html(response.body)
proxies = [] of {ip: String, port: Int32, score: Float64}
document.xpath_nodes(%q(//tr[@data-proxy-id])).each do |node|
ip = node.xpath_node(%q(.//td/abbr/script)).not_nil!.content
ip = ip.match(/document\.write\('(?<sub1>[^']+)'.substr\(8\) \+ '(?<sub2>[^']+)'/).not_nil!
ip = "#{ip["sub1"][8..-1]}#{ip["sub2"]}"
port = node.xpath_node(%q(.//td[2])).not_nil!.content.strip.to_i
anchor = node.xpath_node(%q(.//td[4]/div)).not_nil!
speed = anchor["data-value"].to_f
latency = anchor["title"].to_f
uptime = node.xpath_node(%q(.//td[5]/span)).not_nil!.content.rchop("%").to_f
# TODO: Tweak me
score = (uptime*4 + speed*2 + latency)/7
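# Worked example with hypothetical values: uptime 95.0, speed 8.0, latency 2.5
# => score = (95.0*4 + 8.0*2 + 2.5)/7 = 398.5/7 ≈ 56.9. As weighted, a higher
# latency raises the score, hence the TODO above.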
proxies << {ip: ip, port: port, score: score}
end
proxies = proxies.sort_by { |proxy| proxy[:score] }.reverse
return proxies
end
def get_spys_proxies(country_code = "US")
client = HTTP::Client.new(URI.parse("http://spys.one"))
client.read_timeout = 10.seconds
client.connect_timeout = 10.seconds
@@ -108,7 +151,15 @@ def get_proxies(country_code = "US")
"xf4" => "0",
"xf5" => "1",
}
response = client.post("/free-proxy-list/#{country_code}/", headers, form: body)
20.times do
if response.status_code == 200
break
end
response = client.post("/free-proxy-list/#{country_code}/", headers, form: body)
end
response = XML.parse_html(response.body)
mapping = response.xpath_node(%q(.//body/script)).not_nil!.content

@@ -154,3 +154,41 @@ def update_decrypt_function
Fiber.yield
end
end
def find_working_proxies(regions)
proxy_channel = Channel({String, Array({ip: String, port: Int32})}).new
regions.each do |region|
spawn do
loop do
begin
proxies = get_proxies(region).first(20)
rescue ex
next proxy_channel.send({region, Array({ip: String, port: Int32}).new})
end
proxies.select! do |proxy|
begin
client = HTTPClient.new(YT_URL)
client.read_timeout = 10.seconds
client.connect_timeout = 10.seconds
proxy = HTTPProxy.new(proxy_host: proxy[:ip], proxy_port: proxy[:port])
client.set_proxy(proxy)
client.get("/").status_code == 200
rescue ex
false
end
end
proxies = proxies.map { |proxy| {ip: proxy[:ip], port: proxy[:port]} }
proxy_channel.send({region, proxies})
end
end
end
loop do
yield proxy_channel.receive
end
end

src/invidious/mixes.cr Normal file

@@ -0,0 +1,74 @@
class MixVideo
add_mapping({
title: String,
id: String,
author: String,
ucid: String,
length_seconds: Int32,
index: Int32,
})
end
class Mix
add_mapping({
title: String,
id: String,
videos: Array(MixVideo),
})
end
def fetch_mix(rdid, video_id, cookies = nil)
client = make_client(YT_URL)
headers = HTTP::Headers.new
headers["User-Agent"] = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
if cookies
headers = cookies.add_request_headers(headers)
end
response = client.get("/watch?v=#{video_id}&list=#{rdid}&bpctr=#{Time.new.epoch + 2000}&gl=US&hl=en", headers)
yt_data = response.body.match(/window\["ytInitialData"\] = (?<data>.*);/)
if yt_data
yt_data = JSON.parse(yt_data["data"].rchop(";"))
else
raise "Could not create mix."
end
playlist = yt_data["contents"]["twoColumnWatchNextResults"]["playlist"]["playlist"]
mix_title = playlist["title"].as_s
contents = playlist["contents"].as_a
until contents[0]["playlistPanelVideoRenderer"]["videoId"].as_s == video_id
contents.shift
end
videos = [] of MixVideo
contents.each do |item|
item = item["playlistPanelVideoRenderer"]
id = item["videoId"].as_s
title = item["title"]["simpleText"].as_s
author = item["longBylineText"]["runs"][0]["text"].as_s
ucid = item["longBylineText"]["runs"][0]["navigationEndpoint"]["browseEndpoint"]["browseId"].as_s
length_seconds = decode_length_seconds(item["lengthText"]["simpleText"].as_s)
index = item["navigationEndpoint"]["watchEndpoint"]["index"].as_i
videos << MixVideo.new(
title,
id,
author,
ucid,
length_seconds,
index
)
end
if !cookies
next_page = fetch_mix(rdid, videos[-1].id, response.cookies)
videos += next_page.videos
end
videos.uniq! { |video| video.id }
videos = videos.first(50)
return Mix.new(mix_title, rdid, videos)
end
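# On the first (cookie-less) call, fetch_mix recurses once with the response
# cookies to fetch the continuation page, then dedupes by video ID and caps
# the merged list at 50 entries.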

@@ -1,17 +1,3 @@
class Playlist
add_mapping({
title: String,
id: String,
author: String,
ucid: String,
description: String,
description_html: String,
video_count: Int32,
views: Int64,
updated: Time,
})
end
class PlaylistVideo
add_mapping({
title: String,
@@ -25,6 +11,21 @@ PlaylistVideo
})
end
class Playlist
add_mapping({
title: String,
id: String,
author: String,
author_thumbnail: String,
ucid: String,
description: String,
description_html: String,
video_count: Int32,
views: Int64,
updated: Time,
})
end
def fetch_playlist_videos(plid, page, video_count)
client = make_client(YT_URL)
@@ -42,10 +43,12 @@ def fetch_playlist_videos(plid, page, video_count)
nodeset = document.xpath_nodes(%q(.//tr[contains(@class, "pl-video")]))
videos = extract_playlist(plid, nodeset, index)
else
# Playlist has less than one page of videos, so subsequent pages will be empty
if page > 1
videos = [] of PlaylistVideo
else
response = client.get("/playlist?list=#{plid}&disable_polymer=1")
# Extract first page of videos
response = client.get("/playlist?list=#{plid}&gl=US&hl=en&disable_polymer=1")
document = XML.parse_html(response.body)
nodeset = document.xpath_nodes(%q(.//tr[contains(@class, "pl-video")]))
@@ -105,7 +108,8 @@ def produce_playlist_url(id, index)
end
ucid = "VL" + id
meta = "\x08#{write_var_int(index).join}"
meta = [0x08_u8] + write_var_int(index)
meta = Slice.new(meta.to_unsafe, meta.size)
meta = Base64.urlsafe_encode(meta, false)
meta = "PT:#{meta}"
@@ -141,7 +145,7 @@ def fetch_playlist(plid)
plid = "UU#{plid.lchop("UC")}"
end
response = client.get("/playlist?list=#{plid}&disable_polymer=1")
response = client.get("/playlist?list=#{plid}&hl=en&disable_polymer=1")
if response.status_code != 200
raise "Invalid playlist."
end
@@ -160,10 +164,13 @@ def fetch_playlist(plid)
title = title.content.strip(" \n")
description_html = document.xpath_node(%q(//span[@class="pl-header-description-text"]/div/div[1]))
description_html ||= document.xpath_node(%q(//span[@class="pl-header-description-text"]))
description_html, description = html_to_content(description_html)
anchor = document.xpath_node(%q(//ul[@class="pl-header-details"])).not_nil!
author = anchor.xpath_node(%q(.//li[1]/a)).not_nil!.content
author_thumbnail = document.xpath_node(%q(//img[@class="channel-header-profile-image"])).try &.["src"]
author_thumbnail ||= ""
ucid = anchor.xpath_node(%q(.//li[1]/a)).not_nil!["href"].split("/")[2]
video_count = anchor.xpath_node(%q(.//li[2])).not_nil!.content.delete("videos, ").to_i
@@ -181,6 +188,7 @@ def fetch_playlist(plid)
title,
plid,
author,
author_thumbnail,
ucid,
description,
description_html,

@@ -89,7 +89,7 @@ def search(query, page = 1, search_params = produce_search_params(content_type:
return {0, [] of SearchItem}
end
html = client.get("/results?q=#{URI.escape(query)}&page=#{page}&sp=#{search_params}&disable_polymer=1").body
html = client.get("/results?q=#{URI.escape(query)}&page=#{page}&sp=#{search_params}&hl=en&disable_polymer=1").body
if html.empty?
return {0, [] of SearchItem}
end

@@ -110,7 +110,7 @@ CAPTION_LANGUAGES = {
REGIONS = {"AD", "AE", "AF", "AG", "AI", "AL", "AM", "AO", "AQ", "AR", "AS", "AT", "AU", "AW", "AX", "AZ", "BA", "BB", "BD", "BE", "BF", "BG", "BH", "BI", "BJ", "BL", "BM", "BN", "BO", "BQ", "BR", "BS", "BT", "BV", "BW", "BY", "BZ", "CA", "CC", "CD", "CF", "CG", "CH", "CI", "CK", "CL", "CM", "CN", "CO", "CR", "CU", "CV", "CW", "CX", "CY", "CZ", "DE", "DJ", "DK", "DM", "DO", "DZ", "EC", "EE", "EG", "EH", "ER", "ES", "ET", "FI", "FJ", "FK", "FM", "FO", "FR", "GA", "GB", "GD", "GE", "GF", "GG", "GH", "GI", "GL", "GM", "GN", "GP", "GQ", "GR", "GS", "GT", "GU", "GW", "GY", "HK", "HM", "HN", "HR", "HT", "HU", "ID", "IE", "IL", "IM", "IN", "IO", "IQ", "IR", "IS", "IT", "JE", "JM", "JO", "JP", "KE", "KG", "KH", "KI", "KM", "KN", "KP", "KR", "KW", "KY", "KZ", "LA", "LB", "LC", "LI", "LK", "LR", "LS", "LT", "LU", "LV", "LY", "MA", "MC", "MD", "ME", "MF", "MG", "MH", "MK", "ML", "MM", "MN", "MO", "MP", "MQ", "MR", "MS", "MT", "MU", "MV", "MW", "MX", "MY", "MZ", "NA", "NC", "NE", "NF", "NG", "NI", "NL", "NO", "NP", "NR", "NU", "NZ", "OM", "PA", "PE", "PF", "PG", "PH", "PK", "PL", "PM", "PN", "PR", "PS", "PT", "PW", "PY", "QA", "RE", "RO", "RS", "RU", "RW", "SA", "SB", "SC", "SD", "SE", "SG", "SH", "SI", "SJ", "SK", "SL", "SM", "SN", "SO", "SR", "SS", "ST", "SV", "SX", "SY", "SZ", "TC", "TD", "TF", "TG", "TH", "TJ", "TK", "TL", "TM", "TN", "TO", "TR", "TT", "TV", "TW", "TZ", "UA", "UG", "UM", "US", "UY", "UZ", "VA", "VC", "VE", "VG", "VI", "VN", "VU", "WF", "WS", "YE", "YT", "ZA", "ZM", "ZW"}
BYPASS_REGIONS = {
"UK",
"GB",
"DE",
"FR",
"IN",
@@ -129,7 +129,6 @@ BYPASS_REGIONS = {
"ID",
"BD",
"MX",
"ET",
"PH",
"EG",
"VN",
@@ -274,6 +273,12 @@ class Video
streams.each { |s| s.add("label", "#{s["quality"]} - #{s["type"].split(";")[0].split("/")[1]}") }
streams = streams.uniq { |s| s["label"] }
if self.info["region"]?
streams.each do |fmt|
fmt["url"] += "&region=" + self.info["region"]
end
end
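# info["region"] is set by fetch_video when a geo-bypass proxy succeeded (see
# below), so tagging each stream URL lets /videoplayback replay the request
# through the same region's proxy pool.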
if streams[0]? && streams[0]["s"]?
streams.each do |fmt|
fmt["url"] += "&signature=" + decrypt_signature(fmt["s"], decrypt_function)
@@ -363,6 +368,12 @@ class Video
end
end
if self.info["region"]?
adaptive_fmts.each do |fmt|
fmt["url"] += "&region=" + self.info["region"]
end
end
if adaptive_fmts[0]? && adaptive_fmts[0]["s"]?
adaptive_fmts.each do |fmt|
fmt["url"] += "&signature=" + decrypt_signature(fmt["s"], decrypt_function)
@@ -466,14 +477,14 @@ class CaptionName
)
end
def get_video(id, db, refresh = true)
def get_video(id, db, proxies = {} of String => Array({ip: String, port: Int32}), refresh = true)
if db.query_one?("SELECT EXISTS (SELECT true FROM videos WHERE id = $1)", id, as: Bool)
video = db.query_one("SELECT * FROM videos WHERE id = $1", id, as: Video)
# If record was last updated over 10 minutes ago, refresh (expire param in response lasts for 6 hours)
if refresh && Time.now - video.updated > 10.minutes
begin
video = fetch_video(id)
video = fetch_video(id, proxies)
video_array = video.to_a
args = arg_array(video_array[1..-1], 2)
@@ -488,7 +499,7 @@ def get_video(id, db, refresh = true)
end
end
else
video = fetch_video(id)
video = fetch_video(id, proxies)
video_array = video.to_a
args = arg_array(video_array)
@@ -499,13 +510,13 @@ def get_video(id, db, refresh = true)
return video
end
def fetch_video(id)
def fetch_video(id, proxies)
html_channel = Channel(XML::Node).new
info_channel = Channel(HTTP::Params).new
spawn do
client = make_client(YT_URL)
html = client.get("/watch?v=#{id}&bpctr=#{Time.new.epoch + 2000}&disable_polymer=1")
html = client.get("/watch?v=#{id}&bpctr=#{Time.new.epoch + 2000}&gl=US&hl=en&disable_polymer=1")
html = XML.parse_html(html.body)
html_channel.send(html)
@@ -528,45 +539,58 @@ def fetch_video(id)
info = info_channel.receive
if info["reason"]? && info["reason"].includes? "your country"
bypass_channel = Channel({HTTP::Params | Nil, XML::Node | Nil}).new
bypass_channel = Channel(HTTPProxy | Nil).new
BYPASS_REGIONS.each do |country_code|
proxies.each do |region, list|
spawn do
begin
proxies = get_proxies(country_code)
list.each do |proxy|
begin
client = HTTPClient.new(YT_URL)
client.read_timeout = 10.seconds
client.connect_timeout = 10.seconds
# Try not to overload single proxy
proxy = proxies[0, 5].sample(1)[0]
proxy = HTTPProxy.new(proxy_host: proxy[:ip], proxy_port: proxy[:port])
proxy = HTTPProxy.new(proxy_host: proxy[:ip], proxy_port: proxy[:port])
client.set_proxy(proxy)
client = HTTPClient.new(URI.parse("https://www.youtube.com"))
client.read_timeout = 10.seconds
client.connect_timeout = 10.seconds
client.set_proxy(proxy)
info = HTTP::Params.parse(client.get("/get_video_info?video_id=#{id}&ps=default&eurl=&gl=US&hl=en&disable_polymer=1").body)
if !info["reason"]?
bypass_channel.send(proxy)
else
bypass_channel.send(nil)
end
proxy_info = client.get("/get_video_info?video_id=#{id}&el=detailpage&ps=default&eurl=&gl=US&hl=en&disable_polymer=1")
proxy_info = HTTP::Params.parse(proxy_info.body)
if !proxy_info["reason"]?
proxy_html = client.get("/watch?v=#{id}&bpctr=#{Time.new.epoch + 2000}&gl=US&hl=en&disable_polymer=1")
proxy_html = XML.parse_html(proxy_html.body)
bypass_channel.send({proxy_info, proxy_html})
else
bypass_channel.send({nil, nil})
break
rescue ex
end
rescue ex
bypass_channel.send({nil, nil})
end
end
end
BYPASS_REGIONS.size.times do
response = bypass_channel.receive
if response[0] || response[1]
info = response[0].not_nil!
html = response[1].not_nil!
break
proxies.size.times do
proxy = bypass_channel.receive
if proxy
begin
client = HTTPClient.new(YT_URL)
client.read_timeout = 10.seconds
client.connect_timeout = 10.seconds
client.set_proxy(proxy)
html = XML.parse_html(client.get("/watch?v=#{id}&bpctr=#{Time.new.epoch + 2000}&gl=US&hl=en&disable_polymer=1").body)
info = HTTP::Params.parse(client.get("/get_video_info?video_id=#{id}&el=detailpage&ps=default&eurl=&gl=US&hl=en&disable_polymer=1").body)
if info["reason"]?
info = HTTP::Params.parse(client.get("/get_video_info?video_id=#{id}&ps=default&eurl=&gl=US&hl=en&disable_polymer=1").body)
end
proxy = {ip: proxy.proxy_host, port: proxy.proxy_port}
region = proxies.select { |region, list| list.includes? proxy }
if !region.empty?
info["region"] = region.keys[0]
end
break
rescue ex
end
end
end
end
@@ -603,10 +627,15 @@ def fetch_video(id)
genre = html.xpath_node(%q(//meta[@itemprop="genre"])).not_nil!["content"]
genre_url = html.xpath_node(%(//a[text()="#{genre}"])).try &.["href"]
if genre == "Movies"
genre_url ||= "/channel/UClgRkhTL3_hImCAmdLfDE4g"
case genre
when "Movies"
genre_url = "/channel/UClgRkhTL3_hImCAmdLfDE4g"
when "Education"
# Education channel is linked but does not exist
# genre_url = "/channel/UC3yA8nDwraeOfnYfBWun83g"
genre_url = ""
end
genre_url = ""
genre_url ||= ""
license = html.xpath_node(%q(//h4[contains(text(),"License")]/parent::*/ul/li))
if license

@@ -14,7 +14,12 @@
<p><%= number_with_separator(item.subscriber_count) %> subscribers</p>
<h5><%= item.description_html %></h5>
<% when SearchPlaylist %>
<a style="width:100%;" href="/playlist?list=<%= item.id %>">
<% if item.id.starts_with? "RD" %>
<% url = "/mix?list=#{item.id}&continuation=#{item.videos[0]?.try &.id}" %>
<% else %>
<% url = "/playlist?list=#{item.id}" %>
<% end %>
<a style="width:100%;" href="<%= url %>">
<% if env.get?("user") && env.get("user").as(User).preferences.thin_mode %>
<% else %>
<img style="width:100%;" src="/vi/<%= item.videos[0]?.try &.id %>/mqdefault.jpg"/>
@@ -26,6 +31,17 @@
</p>
<p><%= number_with_separator(item.video_count) %> videos</p>
<p>PLAYLIST</p>
<% when MixVideo %>
<a style="width:100%;" href="/watch?v=<%= item.id %>">
<% if env.get?("user") && env.get("user").as(User).preferences.thin_mode %>
<% else %>
<img style="width:100%;" src="/vi/<%= item.id %>/mqdefault.jpg"/>
<% end %>
<p><%= item.title %></p>
</a>
<p>
<b><a style="width:100%;" href="/channel/<%= item.ucid %>"><%= item.author %></a></b>
</p>
<% else %>
<% if item.responds_to?(:playlists) && !item.playlists.empty? %>
<% params = "&list=#{item.playlists[0]}" %>

@@ -13,7 +13,7 @@
</div>
<div class="pure-control-group">
<label for="import_youtube">Import <a target="_blank"
<label for="import_youtube">Import <a rel="noopener" target="_blank"
href="https://support.google.com/youtube/answer/6224202?hl=en-GB">YouTube subscriptions</a></label>
<input type="file" id="import_youtube" name="import_youtube">
</div>

@@ -0,0 +1,22 @@
<% content_for "header" do %>
<title><%= mix.title %> - Invidious</title>
<% end %>
<div class="pure-g h-box">
<div class="pure-u-2-3">
<h3><%= mix.title %></h3>
</div>
<div class="pure-u-1-3" style="text-align:right;">
<h3>
<a href="/feed/playlist/<%= mix.id %>"><i class="icon ion-logo-rss"></i></a>
</h3>
</div>
</div>
<% mix.videos.each_slice(4) do |slice| %>
<div class="pure-g">
<% slice.each do |item| %>
<%= rendered "components/item" %>
<% end %>
</div>
<% end %>

@@ -35,7 +35,7 @@
<div class="pure-g h-box">
<div class="pure-u-1 pure-u-md-1-5">
<% if page >= 2 %>
<a href="/playlist?list=<%= playlist.id %>&page=<%= page - 1 %>">Next page</a>
<a href="/playlist?list=<%= playlist.id %>&page=<%= page - 1 %>">Previous page</a>
<% end %>
</div>
<div class="pure-u-1 pure-u-md-3-5"></div>

@@ -28,7 +28,7 @@
<div class="pure-u-1 pure-u-md-12-24 searchbar">
<form class="pure-form" action="/search" method="get">
<fieldset>
<input type="search" style="width:100%;" name="q" placeholder="search" value="<%= env.params.query["q"]? || env.get? "search" %>">
<input type="search" style="width:100%;" name="q" placeholder="search" value="<%= env.params.query["q"]?.try {|x| HTML.escape(x)} || env.get?("search").try {|x| HTML.escape(x.as(String)) } %>">
</fieldset>
</form>
</div>

@@ -55,7 +55,13 @@
<p><i class="icon ion-ios-eye"></i> <%= number_with_separator(video.views) %></p>
<p><i class="icon ion-ios-thumbs-up"></i> <%= number_with_separator(video.likes) %></p>
<p><i class="icon ion-ios-thumbs-down"></i> <%= number_with_separator(video.dislikes) %></p>
<p id="Genre">Genre: <a href="<%= video.genre_url %>"><%= video.genre %></a></p>
<p id="Genre">Genre:
<% if video.genre_url.empty? %>
<%= video.genre %>
<% else %>
<a href="<%= video.genre_url %>"><%= video.genre %></a>
<% end %>
</p>
<% if !video.license.empty? %>
<p id="License">License: <%= video.license %></p>
<% end %>
@@ -212,7 +218,7 @@ function get_reddit_comments() {
{title} \
</h3> \
<b> \
<a target="_blank" href="https://reddit.com{permalink}">View more comments on Reddit</a> \
<a rel="noopener" target="_blank" href="https://reddit.com{permalink}">View more comments on Reddit</a> \
</b> \
</div> \
<div>{contentHtml}</div> \