How to import WordPress data (RSS feed) into Joplin?


Install Joplin (https://joplin.cozic.net) and start the REST API by enabling the Web Clipper service. (Easy)
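To check that the API is reachable before going further, you can hit the /ping endpoint (a quick sketch; 41184 is the default Web Clipper port):

import requests

# The Web Clipper service answers on port 41184 by default.
resp = requests.get("http://127.0.0.1:41184/ping")
print(resp.text)  # prints "JoplinClipperServer" when the API is up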

Step 1: Put the script below in a folder.

Step 2: Edit the script and set your API token (shown in Joplin's Web Clipper options).

Step 3: Run the script.

The script:

#
# Version 1
# For Python 3
#
#   ARIAS Frederic
#   Sorry... Python is hard for me :)
#

import calendar
import json

import feedparser
import requests

# Joplin Web Clipper API settings
ip = "127.0.0.1"
port = "41184"
token = "Put your token here"

headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}

url_notes = (
    "http://"+ip+":"+port+"/notes?"
    "token="+token
)
url_folders = (
    "http://"+ip+":"+port+"/folders?"
    "token="+token
)
url_tags = (
    "http://"+ip+":"+port+"/tags?"
    "token="+token
)
url_resources = (
    "http://"+ip+":"+port+"/resources?"
    "token="+token
)

# Init: client-chosen 32-character ID for the destination notebook
Wordpress_UID = "12345678901234567801234567890123"
UID = {}

payload = {
    "id":Wordpress_UID,
    "title":"Wordpress Import"
}

try:
    resp = requests.post(url_folders, data=json.dumps(payload), headers=headers)
    resp.raise_for_status()
    resp_dict = resp.json()
    print(resp_dict)
    print("Notebook ID:", resp_dict['id'])
    WordPress_UID_real = resp_dict['id']
    UID[Wordpress_UID] = str(resp_dict['id'])
except requests.exceptions.RequestException as e:
    # Without the notebook there is nowhere to import into.
    raise SystemExit("Could not create the notebook: %s" % e)

# Walk the feed page by page until WordPress returns an empty page.
# (The original version started at paged=0 and stepped by 2, skipping pages.)
numero = 0
nb_entries = 1
nb_metadata_import = 0

while nb_entries > 0:
    numero += 1
    print("----- Page", numero, "-------")
    url = "http://www.cyber-neurones.org/feed/?paged=" + str(numero)
    feed = feedparser.parse(url)
    nb_entries = len(feed.entries)
    for entry in feed.entries:
        nb_metadata_import += 1
        my_title = entry.title
        my_link = entry.link
        article_published_at = entry.published  # human-readable string
        article_author = entry.author
        # published_parsed is a UTC struct_time; Joplin wants milliseconds.
        timestamp = calendar.timegm(entry.published_parsed) * 1000
        print("Published at " + article_published_at)
        my_body = entry.description
        payload_note = {
            "parent_id": WordPress_UID_real,
            "title": my_title,
            "source": "Wordpress",
            "source_url": my_link,
            "order": nb_metadata_import,
            "user_created_time": timestamp,
            "user_updated_time": timestamp,
            "author": article_author,
            "body_html": my_body
        }
        payload_note_put = {
            "source": "Wordpress",
            "order": nb_metadata_import,
            "source_url": my_link,
            "user_created_time": timestamp,
            "user_updated_time": timestamp,
            "author": article_author
        }

        try:
            resp = requests.post(url_notes, json=payload_note)
            resp.raise_for_status()
            myuid = resp.json()['id']
            print("Created note", myuid)
        except requests.exceptions.RequestException as e:
            print("Could not create note:", e)
            continue

        # Second request: re-apply the metadata to the note just created.
        url_notes_put = (
            "http://"+ip+":"+port+"/notes/"+myuid+"?"
            "token="+token
        )
        try:
            resp = requests.put(url_notes_put, json=payload_note_put)
            resp.raise_for_status()
            print(resp.json())
        except requests.exceptions.RequestException as e:
            print("Could not update note:", e)

Another ransom demand: 15LZuFSVyDAoaNLtbh4ru7ZQWvZxEosCaf


In the email source, you can see that this person is not at their first attempt: https://www.bitcoinabuse.com/reports/17X5raT9zqDPBi4L8NrvwSQ77LuG9QjFCH

X-SPAMOUT-IP: 203.239.130.5 (TRUST)
X-Original-SENDERIP: 203.239.130.5
X-SPAMOUT-COUNTRY: KR
X-SPAMOUT-FROM: <jt.joo@elim.net>
X-SPAMOUT-RELAY: IP

The address is already in the abuse reports: https://www.bitcoinabuse.com/reports/15LZuFSVyDAoaNLtbh4ru7ZQWvZxEosCaf

Here is the email:

Hi, this account is hacked! Renew the password immediately!
You might not know anything about me and you are probably surprised for what reason you are getting this particular message, proper?
I am a hacker who burst your email and all devices some time ago.
Do not attempt to msg me or look for me, it is hopeless, because I sent you this message from YOUR account that I've hacked.
I have build in malware soft on the adult vids (porno) site and suppose that you have enjoyed this site to have fun (you understand what I want to say).
During you were watching video clips, your browser started out functioning as a RDP (Remote Control) with a keylogger that granted me permission to access your desktop and camera.
Then, my application got all data.
You have put passcodes on the web-sites you visited, I intercepted them.
Of course, you can modify each of them, or have already changed them.
Even so it does not matter, my malware updates needed data every time.
What did I do?
I made a backup of your device. Of all files and each contact.
I created a dual-screen video file. The 1st screen reveals the clip you had been watching (you've got an interesting preferences, ha-ha...), and the 2nd part shows the movie from your web camera.
What exactly must you do?
Great, I think, 1000 USD will be a inexpensive amount of money for this very little riddle. You'll make your deposit by bitcoins (in case you don't recognize this, search “how to buy bitcoin” in Google).
My bitcoin wallet address:
15LZuFSVyDAoaNLtbh4ru7ZQWvZxEosCaf
(It is cAsE sensitive, so just copy and paste it).
Warning:
You have only 2 days to make the payment. (I built in an unique pixel in this e-mail, and at the moment I understand that you have read through this email).
To monitor the reading of a message and the actions in it, I use a Facebook pixel. Thanks to them. (Everything that is applied for the authorities can help us.)

If I do not get bitcoins, I shall undoubtedly offer your video to all your contacts, including family members, colleagues, etc?

How to import Google+ data into Joplin?


Install Joplin (https://joplin.cozic.net) and start the REST API by enabling the Web Clipper service.

Step 1: Download everything with https://takeout.google.com

Step 2: Uncompress it all into the same folder.

Step 3: Put the script below in that folder.

Step 4: Edit the script and set your API token.

Step 5: Run the script.
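Before running it, a quick sanity check that the Takeout files are where the script expects them can save time (this uses the same glob pattern as the script below):

import glob

# Same pattern the import script uses to locate the Google+ photo metadata.
files = glob.glob('Takeout*/**/*.metadata.csv', recursive=True)
print(len(files), "metadata files found")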

The script:

#
# Version 1
# For Python 3
#
#   ARIAS Frederic
#   Sorry... Python is hard for me :)
#


import calendar
import csv
import glob
import json
import locale
import os
from datetime import datetime
from pathlib import Path

import requests

nb_metadata = 0
nb_metadata_import = 0

def month_string_to_number(string):
    """Map an abbreviated (mostly French) month name to its number."""
    m = {
        'janv.': 1,
        'feb.': 2,
        'févr.': 2,
        'mar.': 3,
        'mars': 3,
        'apr.': 4,
        'avr.': 4,
        'may.': 5,
        'mai': 5,
        'juin': 6,
        'juil.': 7,
        'aug.': 8,
        'août': 8,
        'sept.': 9,
        'oct.': 10,
        'nov.': 11,
        'déc.': 12
    }
    s = string.strip()[:5].lower()
    try:
        return m[s]
    except KeyError:
        raise ValueError('Not a month: ' + string)

# Google Takeout formats its dates with French month names here.
locale.setlocale(locale.LC_TIME, 'fr_FR.UTF-8')

# Template datetime; its fields are overwritten for every CSV row below.
date = datetime.strptime('2017-05-04', "%Y-%m-%d")

# Joplin Web Clipper API settings
ip = "127.0.0.1"
port = "41184"
token = "Put your token here"

headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}

url_notes = (
    "http://"+ip+":"+port+"/notes?"
    "token="+token
)
url_folders = (
    "http://"+ip+":"+port+"/folders?"
    "token="+token
)
url_tags = (
    "http://"+ip+":"+port+"/tags?"
    "token="+token
)
url_resources = (
    "http://"+ip+":"+port+"/resources?"
    "token="+token
)

# Init: client-chosen 32-character ID for the destination notebook
GooglePlus_UID = "12345678901234567801234567890123"
UID = {}

payload = {
    "id":GooglePlus_UID,
    "title":"GooglePlus Import"
}

try:
    resp = requests.post(url_folders, data=json.dumps(payload), headers=headers)
    resp.raise_for_status()
    resp_dict = resp.json()
    print(resp_dict)
    print("Notebook ID:", resp_dict['id'])
    GooglePlus_UID_real = resp_dict['id']
    UID[GooglePlus_UID] = str(resp_dict['id'])
except requests.exceptions.RequestException as e:
    # Without the notebook there is nowhere to import into.
    raise SystemExit("Could not create the notebook: %s" % e)

for csvfilename in glob.iglob('Takeout*/**/*.metadata.csv', recursive=True):
    nb_metadata += 1
    print(nb_metadata, " ", csvfilename)
    # "IMG_1234.jpg.metadata.csv" -> "IMG_1234.jpg"
    mybasename = os.path.basename(csvfilename)
    mylist = mybasename.split(".")
    myfilename = mylist[0] + "." + mylist[1]
    filename = os.path.dirname(csvfilename) + "/" + myfilename
    my_file = Path(filename)
    with open(csvfilename) as csvfile:
        reader = csv.DictReader(csvfile)
        for row in reader:
            if len(row['description']) == 0:
                continue
            print(row['title'], row['description'], row['creation_time.formatted'],
                  row['geo_data.latitude'], row['geo_data.longitude'])
            # "25 janv. 2018 à 10:11:12 UTC" -> [day, month, year, 'à', time, 'UTC']
            mylist2 = row['creation_time.formatted'].split(" ")
            mylist3 = mylist2[4].split(":")
            date = date.replace(year=int(mylist2[2]),
                                month=month_string_to_number(mylist2[1]),
                                day=int(mylist2[0]),
                                hour=int(mylist3[0]),
                                minute=int(mylist3[1]),
                                second=int(mylist3[2]))
            # The Takeout timestamp is UTC; Joplin wants milliseconds.
            timestamp = calendar.timegm(date.timetuple()) * 1000
            print(timestamp)
            nb_metadata_import += 1
            mybody = row['description']

            if len(row['geo_data.latitude']) > 2:
                payload_note = {
                    "parent_id": GooglePlus_UID_real,
                    "title": row['creation_time.formatted'],
                    "source": myfilename,
                    "source_url": row['url'],
                    "order": nb_metadata_import,
                    "body": mybody
                }
                payload_note_put = {
                    "latitude": float(row['geo_data.latitude']),
                    "longitude": float(row['geo_data.longitude']),
                    "source": myfilename,
                    "source_url": row['url'],
                    "order": nb_metadata_import,
                    "user_created_time": timestamp,
                    "user_updated_time": timestamp,
                    "author": "Google+"
                }
            else:
                payload_note = {
                    "parent_id": GooglePlus_UID_real,
                    "title": row['creation_time.formatted'],
                    "source": myfilename,
                    "source_url": row['url'],
                    "order": nb_metadata_import,
                    "user_created_time": timestamp,
                    "user_updated_time": timestamp,
                    "author": "Google+",
                    "body": mybody
                }
                payload_note_put = {
                    "source": myfilename,
                    "order": nb_metadata_import,
                    "source_url": row['url'],
                    "user_created_time": timestamp,
                    "user_updated_time": timestamp,
                    "author": "Google+"
                }

            try:
                resp = requests.post(url_notes, json=payload_note)
                resp.raise_for_status()
                myuid = resp.json()['id']
                print("Created note", myuid)
            except requests.exceptions.RequestException as e:
                print("Could not create note:", e)
                continue

            # Second request: re-apply the metadata to the note just created.
            url_notes_put = (
                "http://"+ip+":"+port+"/notes/"+myuid+"?"
                "token="+token
            )
            try:
                resp = requests.put(url_notes_put, json=payload_note_put)
                resp.raise_for_status()
                print(resp.json())
            except requests.exceptions.RequestException as e:
                print("Could not update note:", e)

            if my_file.is_file():
                # Upload the matching picture as a resource (multipart form).
                cmd = ("curl -F 'data=@" + filename + "'"
                       " -F 'props={\"title\":\"" + myfilename + "\"}'"
                       " http://" + ip + ":" + port + "/resources?token=" + token)
                print("Command: " + cmd)
                resp = os.popen(cmd).read()
                try:
                    myuid_picture = json.loads(resp)['id']
                except (ValueError, KeyError):
                    print('bad json:', resp)
                    continue

                # Embed the picture in the note body.
                mybody = (row['description'] +
                          "\n  ![" + myfilename + "](:/" + myuid_picture + ")   \n")
                payload_note_put = {
                    "body": mybody
                }
                try:
                    resp = requests.put(url_notes_put, json=payload_note_put)
                    resp.raise_for_status()
                    print(resp.json())
                except requests.exceptions.RequestException as e:
                    print("Could not update note body:", e)
print(nb_metadata)
print(nb_metadata_import)
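The script shells out to curl to attach each picture; the same upload can be done with requests alone. A minimal sketch, assuming the usual multipart form of the /resources endpoint (a data file field plus a props JSON field); attach_file is a hypothetical helper:

import json
import requests

def attach_file(ip, port, token, filename, title):
    """Upload a file as a Joplin resource and return the resource ID."""
    url = "http://"+ip+":"+port+"/resources?token="+token
    with open(filename, 'rb') as f:
        resp = requests.post(url, files={
            'data': (title, f),
            'props': (None, json.dumps({'title': title})),
        })
    resp.raise_for_status()
    return resp.json()['id']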

Facebook: free advertising!


I recommend this programme: https://www.franceculture.fr/numerique/facebook-15-ans-beaucoup-de-critiques-mais-toujours-plus-damis : « Facebook : 15 ans, beaucoup de critiques mais toujours plus d'amis » (Facebook: 15 years old, plenty of criticism but ever more friends)

« #Facebook makes its money by selling advertisers your available brain time. That time largely takes the form of your personal data, which has fantastic value for advertisers who want to talk to you, and not to your neighbour. »

Also worth a look:

https://haveibeenpwned.com : to check whether your email address appears in a data leak

Link


I strongly recommend the site https://haveibeenpwned.com. It lets you check whether your email address appears in the recent data leaks.
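For scripted checks there is also an API; version 3 of it requires an API key sent in the hibp-api-key header. A minimal sketch, assuming you have such a key (breaches_for is a hypothetical helper):

import requests

def breaches_for(email, api_key):
    """Return the names of breaches containing the address, or [] if none."""
    resp = requests.get(
        "https://haveibeenpwned.com/api/v3/breachedaccount/" + email,
        headers={"hibp-api-key": api_key, "user-agent": "breach-check-script"},
    )
    if resp.status_code == 404:  # address not found in any breach
        return []
    resp.raise_for_status()
    return [b["Name"] for b in resp.json()]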