Tutanota is a secure email service built in Germany. Use encrypted emails on all devices with our open source email client, mobile apps & desktop clients.
Is there a way to conveniently define a C-like structure in Python? I'm tired of writing stuff like:
class MyStruct():
    def __init__(self, field1, field2, field3):
        self.field1 = field1
        self.field2 = field2
        self.field3 = field3
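One concise option, as a minimal sketch assuming Python 3.7+ (the int annotations are placeholders; any types work), is the standard-library dataclasses module:

from dataclasses import dataclass

@dataclass
class MyStruct:
    field1: int
    field2: int
    field3: int

# __init__, __repr__ and __eq__ are generated automatically:
s = MyStruct(1, 2, 3)

collections.namedtuple is a lighter, immutable alternative when the fields never need to change.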
Build apps with radically less overhead and cost.
Serverless computing is transforming traditional software development. These open source platforms will help you get started.
PyChess is a GTK chess client, originally developed for GNOME, but running well under all other Linux desktops (that we know of, at least). PyChess is 100% Python code, from the top of the UI to the bottom of the chess engine.
cli-visualizer is a command-line audio visualizer: it generates animated imagery in the terminal based on the music being played. It supports MPD, with experimental support for ALSA and PulseAudio. Free and open source software.
It will make your productivity plummet
Unison is a file-synchronization tool for OSX, Unix, and Windows. It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other.
Cloud hosting service for professional applications.
The people who created C sure loved keeping the number of keywords low, and today I’m going to show you yet another place you can use the static keyword in C99.
You might have seen function parameter declarations for array parameters that include the size:
void foo(int myArray[10]);
The function will still receive a naked int *, but the [10] part can serve as documentation for the people reading the code, saying that the function expects an array of 10 ints.
But you can actually also use the keyword static between the brackets:
void bar(int myArray[static 10]);
This tells the compiler that it should assume that the array passed to bar has at least 10 elements. (Note that this rules out a NULL pointer!) In practice, compilers such as Clang can use this to warn when they see a null pointer or a too-small array being passed, and to optimize under the assumption that the first 10 elements are accessible.
TOM (TOpic Modeling) is a Python 3 library for topic modeling and browsing, licensed under the MIT license.
Its objective is to allow for an efficient analysis of a text corpus from start to finish, via the discovery of latent topics. To this end, TOM features functions for preparing and vectorizing a text corpus. It also offers a common interface for two topic models (namely LDA using either variational inference or Gibbs sampling, and NMF using alternating least-squares with a projected gradient method), and implements three state-of-the-art methods for estimating the optimal number of topics to model a corpus. What is more, TOM constructs an interactive Web-based browser that makes it easy to explore a topic model and the related corpus.
N-grams have been a common tool for information retrieval and machine learning applications for decades. In nearly all previous works, only a few values of $n$ are tested, with $n > 6$ being exceedingly rare. Larger values of $n$ are not tested due to computational burden or the fear of overfitting.
In this work, we present a method to find the top-$k$ most frequent $n$-grams that is 60$\times$ faster for small $n$, and can tackle large $n\geq1024$. Despite the unprecedented size of $n$ considered, we show how these features still have predictive ability for malware classification tasks. More importantly, large $n$-grams provide benefits in producing features that are interpretable by malware analysts, and can be used to create general purpose signatures compatible with industry standard tools like Yara. Furthermore, the counts of common $n$-grams in a file may be added as features to publicly available human-engineered features that rival the efficacy of professionally-developed features when used to train gradient-boosted decision tree models on the EMBER dataset.
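Not the paper's accelerated algorithm, just a toy illustration of the underlying task: counting the byte $n$-grams in a file and keeping the top-$k$ with Python's collections.Counter (sample.bin is a placeholder path). This brute-force approach is exactly what becomes prohibitively expensive for large $n$, which is the bottleneck the paper addresses.

from collections import Counter

def top_k_ngrams(data: bytes, n: int, k: int):
    # Slide a window of length n over the bytes and count every n-gram.
    counts = Counter(data[i:i + n] for i in range(len(data) - n + 1))
    return counts.most_common(k)

with open("sample.bin", "rb") as f:  # placeholder input file
    blob = f.read()
print(top_k_ngrams(blob, n=8, k=10))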
An easier way to build and share serverless applications with the Serverless Framework.
Generate and store secure passwords. Everything is accessible only to you on our No Knowledge cloud, whether you're on your phone or at your desk. It's totally free!
With the increasing number of scientific publications, the analysis of the trends and the state of the art in a certain scientific field is becoming a very time-consuming and tedious task. In response to urgent needs for information, for which the existing systematic review model does not work well, several other review types have emerged, namely rapid reviews and scoping reviews.
The paper proposes an NLP-powered tool that automates most of the review process by automatically analyzing articles indexed in the IEEE Xplore, PubMed, and Springer digital libraries. We demonstrate the applicability of the toolkit by analyzing articles related to Enhanced Living Environments and Ambient Assisted Living, in accordance with the PRISMA surveying methodology. The relevant articles were processed by the NLP toolkit to identify articles that contain up to 20 properties clustered into 4 logical groups.
The analysis showed increasing attention from the scientific communities towards Enhanced and Assisted Living environments over the last 10 years and revealed several trends in the specific research topics that fall into this scope. The case study demonstrates that the NLP toolkit can ease and speed up the review process and surface valuable insights from the surveyed articles even without manually reading most of them. Moreover, it pinpoints the most relevant articles, which contain more of the properties, and therefore significantly reduces the manual work, while also generating informative tables, charts, and graphs.
Supplementary Materials for the paper Tshitoyan et al. "Unsupervised word embeddings capture latent knowledge from materials science literature", Nature (2019).
In a nutshell, a topic model is a type of statistical model used for tagging the abstract “topics” that occur in a collection of documents and that best represent the information in them.
Many techniques are used to obtain topic models. This post aims to demonstrate the implementation of LDA: a widely used topic modeling technique.
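As a rough sketch of what such an implementation can look like (gensim is an assumed choice here, and the toy documents are placeholders):

from gensim import corpora
from gensim.models import LdaModel

# Toy, pre-tokenized documents (placeholders).
docs = [
    ["cat", "sat", "mat", "cat"],
    ["dog", "chased", "cat"],
    ["stocks", "fell", "market", "trading"],
    ["market", "rally", "stocks"],
]

dictionary = corpora.Dictionary(docs)               # word <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in docs]  # bag-of-words vectors

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=0)

# Print the discovered topics as weighted word lists.
for topic_id, words in lda.print_topics():
    print(topic_id, words)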
pyLDAvis is a Python library for interactive topic model visualization. It is a port of the fabulous R package by Carson Sievert and Kenny Shirley. They did the hard work of crafting an effective visualization. pyLDAvis makes it easy to use the visualization from Python and, in particular, Jupyter notebooks.
To learn more about the method behind the visualization, you can read the original paper explaining it.
This notebook provides a quick overview of how to use pyLDAvis.
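A minimal sketch of feeding a fitted model to pyLDAvis, assuming scikit-learn for the LDA step and using pyLDAvis' generic prepare() entry point (library-specific helper modules also exist, but their names vary across versions); the toy documents are placeholders:

import numpy as np
import pyLDAvis
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["the cat sat on the mat", "dogs chase cats",
        "stocks fell on the market", "a market rally lifted stocks"]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(docs)  # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

# prepare() expects row-normalized topic-term and doc-topic distributions.
topic_term = lda.components_ / lda.components_.sum(axis=1)[:, None]
doc_topic = lda.transform(dtm)

vis = pyLDAvis.prepare(
    topic_term_dists=topic_term,
    doc_topic_dists=doc_topic,
    doc_lengths=np.asarray(dtm.sum(axis=1)).ravel(),
    vocab=vectorizer.get_feature_names_out(),  # get_feature_names() on older scikit-learn
    term_frequency=np.asarray(dtm.sum(axis=0)).ravel(),
    R=10,  # show at most 10 terms per topic (tiny toy vocabulary)
)
pyLDAvis.save_html(vis, "lda_vis.html")  # or pyLDAvis.display(vis) in a notebook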
People and assets can be located programmatically. Estimote's invisible technology makes things happen magically in the right place and at the right time.