We found that only a very small percentage (about 6%) of users make well-informed permission decisions. This indicates that the information provided by current systems is far from sufficient.
We therefore explored what extra information systems could provide to help users make more informed permission decisions. By surveying users' common concerns about apps' permission requests, we identified five types of such information. We further studied the impact and helpfulness of these factors on users' permission decisions with both positive and negative messages.
Our study shows that the background-access factor helps the most, while the grant-rate factor helps the least. Based on these findings, we provide suggestions for system designers to enhance future systems with richer permission information.
Push notifications can be a very useful feature. On web browsers, they allow users to receive timely updates even if the website is not currently open. On Chrome, the feature has become extremely popular since its inception, but it is also the permission least likely to be accepted by users.
To preserve its utility for websites while reducing unwanted interruptions and potential abuse for users, we designed and tested a novel UI and activation mechanism for notification permission prompts in Chrome. To understand how users interact with such prompts, we conducted two large-scale in-the-wild studies involving millions of users.
The first study showed that most users block or ignore the prompts across all types of websites, which prompted us to rethink the prompt's UI and activation logic. The redesigned prompt combines a novel adaptive activation mechanism with a blocklist of interrupting websites, derived from crowd-sourced telemetry from Chrome clients. Current mobile platforms leave it to the app developer to decide when to request permissions (timing) and whether to provide explanations of why and how users' private data are accessed (rationales).
Given these liberties, it is important to understand how developers should use timing and rationales to effectively assist users in their permission decisions. While guidelines and recommendations for developers exist, no study has systematically investigated the actual influence of timing, rationales, and their combinations on users' decision-making process. In this work, we conducted a comparative online study with participants who were asked to interact with mockup apps drawn from a pool of variations of 30 apps.
The study design was guided by developers' current permission request practices derived from a dynamic analysis of the top apps on Google Play. Our results show that there is a clear interplay between timing and rationales on users' permission decisions and the evaluation of their decisions, making the effect of rationales stronger when shown upfront and limiting the effect of timing when rationales are present.
We therefore suggest adaptations to the available guidelines. We also find that permission decisions depend on the individuality of users, indicating that there is no one-size-fits-all permission request strategy; we therefore suggest better individual support and outline one possible solution.
Austin, University of Toronto. We conduct a global study on the behaviors, expectations, and engagement of more than 1,000 participants across 10 countries and regions with respect to Android application permissions. Participants were recruited through mobile advertising and used an application we designed for 30 days.
Our app samples user behaviors (decisions made), rationales (via in-situ surveys), expectations, and attitudes, as well as app-provided explanations. We study the grant and deny decisions our users make, and build mixed-effects logistic regression models to illustrate the many factors that influence this decision making.
Among several interesting findings, we observed that users facing an unexpected permission request are more than twice as likely to deny it compared to a user who expects it, and that permission requests accompanied by an explanation have a deny rate that is roughly half the deny rate of app permission requests without explanations.
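For intuition, the direction of these two effects can be reproduced on synthetic data with a plain logistic regression. This is an illustration only: the study itself fits mixed-effects models, and all data, probabilities, and feature names below are hypothetical.

```python
import math, random

def fit_logistic(X, y, lr=0.1, epochs=300):
    """Plain logistic regression via stochastic gradient descent.
    (Illustration only: the study itself uses mixed-effects models.)"""
    w = [0.0] * (len(X[0]) + 1)              # bias + one weight per feature
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi                      # gradient of the log-loss
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

# Synthetic data mirroring the two reported effect directions (numbers hypothetical):
# unexpected requests are denied more often, explained requests less often.
rng = random.Random(0)
X, y = [], []
for _ in range(2000):
    unexpected = rng.random() < 0.5
    explained = rng.random() < 0.5
    p_deny = 0.2 + 0.3 * unexpected - 0.1 * explained
    X.append([float(unexpected), float(explained)])
    y.append(float(rng.random() < p_deny))

w = fit_logistic(X, y)
print(w[1] > 0, w[2] < 0)   # unexpectedness raises deny odds, explanations lower them
```

The fitted signs recover the qualitative findings; the mixed-effects models in the paper additionally account for per-user variation, which plain logistic regression cannot.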
These findings remain true even when controlling for other factors. To the best of our knowledge, this may be the first study of actual privacy behavior (rather than stated behavior) for Android apps, with users using their own devices, across multiple continents. Password security hinges on an in-depth understanding of the techniques adopted by attackers.
Unfortunately, real-world adversaries resort to pragmatic guessing strategies, such as dictionary attacks, that are inherently difficult to model in password security studies. To be representative of the actual threat, dictionary attacks must be thoughtfully configured and tuned. However, this process requires domain knowledge and expertise that cannot easily be replicated.
The consequence of inaccurately calibrated dictionary attacks is unreliable password security analyses, impaired by severe measurement bias.
In the present work, we introduce a new generation of dictionary attacks that is consistently more resilient to inadequate configurations. Requiring no supervision or domain knowledge, this technique automatically approximates the advanced guessing strategies adopted by real-world attackers. To achieve this, we use deep neural networks to model the proficiency of adversaries in building attack configurations; these models mimic experts' ability to adapt their guessing strategies on the fly by incorporating knowledge of their targets.
Our techniques enable more robust and sound password strength estimates within dictionary attacks, ultimately reducing overestimation when modeling real-world threats in password security. Reiter, Duke University. Known approaches for using decoy passwords (honeywords) to detect credential database breaches suffer from the need for a trusted component to recognize decoys when they are entered in login attempts, and from an attacker's ability to test stolen passwords at other sites to identify user-chosen passwords based on their reuse at those sites.
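For background, the classic honeyword mechanism that such approaches build on requires exactly this kind of trusted component, often called a "honeychecker", which holds the secret index of each user's real password. A toy sketch (all names hypothetical, and not Amnesia's construction):

```python
import random

class Honeychecker:
    """Trusted component holding the secret index of each user's real
    password among the decoys (the state that Amnesia's design avoids)."""
    def __init__(self):
        self.index = {}
    def set(self, user, i):
        self.index[user] = i
    def check(self, user, i):
        return self.index[user] == i

def register(user, password, decoys, checker, db, rng=random):
    sweetwords = decoys + [password]
    rng.shuffle(sweetwords)                 # real password hidden among decoys
    db[user] = sweetwords
    checker.set(user, sweetwords.index(password))

def login(user, attempt, checker, db):
    sweetwords = db[user]
    if attempt not in sweetwords:
        return "wrong password"
    if checker.check(user, sweetwords.index(attempt)):
        return "ok"
    return "ALARM: honeyword entered, breach suspected"

db, checker = {}, Honeychecker()
register("alice", "correct horse", ["tr0ub4dor", "hunter2"], checker, db)
print(login("alice", "correct horse", checker, db))  # ok
print(login("alice", "hunter2", checker, db))        # alarm: decoy from a stolen db
```

An attacker who steals `db` sees three equally plausible passwords and risks raising the alarm; the weakness is that the honeychecker's secret index must itself stay secret, which is the difficulty Amnesia targets.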
Amnesia is a framework that resolves these difficulties: it requires no secret state to detect the entry of honeywords, and it additionally allows a site to monitor for the entry of its decoy passwords elsewhere. We quantify the benefits of Amnesia using probabilistic model checking and demonstrate the practicality of the framework through measurements of a working implementation. Password vault applications allow a user to store multiple passwords in a vault and choose a master password to encrypt the vault.
In practice, attackers may steal the vault's storage file and compromise all stored passwords by offline guessing the master password. Honey vaults have been proposed to address this threat: by producing plausible-looking decoy vaults for wrong master passwords, they force attackers to shift from offline guessing to online verification. However, all existing honey vault schemes suffer from intersection attacks in the multi-leakage case, where an old version of the storage file has also been leaked.
The attacker can offline identify the decoys and completely break these schemes. We design a generic construction based on a multi-similar-password model and further propose an incremental update mechanism. With our mechanism, the attacker cannot gain any extra advantage from the old storage and therefore degenerates to an attacker with knowledge of only the current version.
To further evaluate security in the traditional single-leakage case, where only the current version is stolen, we investigate the theoretically optimal strategy for online verifications and propose practical attacks. Our results indicate that attackers need to carry out roughly twice as many online verifications as previously estimated. This paper presents Checklist, a system for private blocklist lookups. In Checklist, a client can determine whether a particular string appears on a server-held blocklist of strings, without leaking its string to the server.
Checklist is the first blocklist-lookup system that (1) leaks no information about the client's string to the server, (2) does not require the client to store the blocklist in its entirety, and (3) allows the server to respond to the client's query in time sublinear in the blocklist size.
To make this possible, we construct a new two-server private-information-retrieval protocol that is both asymptotically and concretely faster, in terms of server-side time, than those of prior work. We evaluate Checklist in the context of Google's "Safe Browsing" blocklist, which all major browsers use to prevent web clients from visiting malware-hosting URLs.
Today, lookups to this blocklist leak partial hashes of a subset of clients' visited URLs to Google's servers. We have modified Firefox to perform Safe-Browsing blocklist lookups via Checklist servers, which eliminates the leakage of partial URL hashes from the Firefox client to the blocklist servers.
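The core idea behind two-server private information retrieval can be illustrated with the classic XOR scheme, in which each server sees only a uniformly random query share. Checklist's actual protocol is more sophisticated (and sublinear in server time); this is only a conceptual sketch:

```python
import secrets

def pir_query(db_size, index):
    """Client: split the one-hot query vector into two XOR shares.
    Each share alone is uniformly random and reveals nothing about index."""
    share_a = [secrets.randbelow(2) for _ in range(db_size)]
    share_b = share_a.copy()
    share_b[index] ^= 1          # shares differ only at the queried position
    return share_a, share_b

def pir_answer(db, share):
    """Server: XOR together the records its share selects. This linear scan
    is exactly what Checklist improves upon with sublinear server time."""
    acc = 0
    for bit, record in zip(share, db):
        if bit:
            acc ^= record
    return acc

db = [0xDEAD, 0xBEEF, 0xC0DE, 0xF00D]     # toy blocklist entries
a, b = pir_query(len(db), 2)
record = pir_answer(db, a) ^ pir_answer(db, b)
print(hex(record))                        # 0xc0de: the client recovers entry 2
```

XORing the two answers cancels every record selected by both shares, leaving only the record at the position where the shares differ, so neither server individually learns which entry was fetched.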
This privacy gain comes at the cost of increasing communication by a factor of roughly 3. Checklist reduces end-to-end server-side costs by a factor of roughly 6. End-to-end encryption (E2EE) poses a challenge for automated detection of harmful media, such as child sexual abuse material and extremist content. The predominant approach at present, perceptual hash matching, is not viable because in E2EE a communications service cannot access user content. In this work, we explore the technical feasibility of privacy-preserving perceptual hash matching for E2EE services.
We begin by formalizing the problem space and identifying fundamental limitations for protocols. Next, we evaluate the predictive performance of common perceptual hash functions to understand privacy risks to E2EE users and contextualize errors associated with the protocols we design.
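To make the primitive concrete, here is a toy perceptual hash and Hamming-distance matcher. This is a simple average hash over a grayscale grid; deployed systems use far more robust functions, and all pixel data below is made up:

```python
def average_hash(pixels):
    """Perceptual hash: one bit per pixel, set if the pixel is above the mean.
    Small content changes flip few bits, unlike a cryptographic hash."""
    mean = sum(pixels) / len(pixels)
    return [int(p > mean) for p in pixels]

def hamming(h1, h2):
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

def matches(h1, h2, threshold):
    """Match if hashes are within `threshold` bits. The threshold trades
    false positives against false negatives, the error types analyzed here."""
    return hamming(h1, h2) <= threshold

original        = [10, 200, 30, 180, 220, 15, 90, 170, 40]   # toy 3x3 image
slightly_edited = [12, 198, 33, 182, 215, 18, 95, 168, 38]
different       = [200, 10, 180, 30, 15, 220, 170, 90, 160]

h = average_hash(original)
print(matches(h, average_hash(slightly_edited), threshold=2))  # True
print(matches(h, average_hash(different), threshold=2))        # False
```

A near-duplicate stays within the threshold while unrelated content does not; the paper's protocols perform this comparison without revealing the client's hash or, optionally, the hash set.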
Our primary contribution is a set of constructions for privacy-preserving perceptual hash matching. We design and evaluate client-side constructions for scenarios where disclosing the set of harmful hashes is acceptable.
We then design and evaluate interactive protocols that optionally protect the hash set and do not disclose matches to users. The constructions that we propose are practical for deployment on mobile devices and introduce a limited additional risk of false negatives. Erkam Uzun, Simon P.
The explosive growth of biometrics use raises serious privacy concerns. We consider private querying of a real-life biometric scan (e.g., a face image) against a database of such scans. The querier learns only the label(s) of the matching scan(s). We implement our protocol and apply it to facial search by integrating it with our fine-tuned toolchain that maps face images into Hamming space. We have extensively tested our system, achieving high performance with concretely small network usage: for a 10K-row database, query response times over both WAN and LAN are low.
Our false non-matching rate is small. In differential privacy (DP), a challenging problem is to generate synthetic datasets that efficiently capture the useful information in the private data. The synthetic dataset enables any task to be done without privacy concerns and without modification to existing algorithms.
Our solution, PrivSyn, is composed of a new method to automatically and privately identify correlations in the data and a novel method to generate sample data from a dense graphical model. We extensively evaluate different methods on multiple datasets to demonstrate the performance of our approach. Local Differential Privacy (LDP) protocols enable an untrusted data collector to perform privacy-preserving data analytics.
In particular, each user locally perturbs its data to preserve privacy before sending it to the data collector, who aggregates the perturbed data to obtain statistics of interest. In the past several years, researchers from multiple communities—such as security, database, and theoretical computer science—have proposed many LDP protocols. These studies mainly focused on improving the utility of the LDP protocols. However, the security of LDP protocols is largely unexplored.
In this work, we aim to bridge this gap. We focus on LDP protocols for frequency estimation and heavy hitter identification, two basic data analytics tasks. Specifically, we show that an attacker can inject fake users into an LDP protocol; the fake users send carefully crafted data to the data collector such that the LDP protocol estimates high frequencies for arbitrary attacker-chosen items or identifies them as heavy hitters.
We call these data poisoning attacks. We also explore three countermeasures against them. Our experimental results show that the countermeasures can effectively defend against our attacks in some scenarios but have limited effectiveness in others, highlighting the need for new defenses.
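The flavor of such an attack can be seen on the simplest LDP primitive, one-bit randomized response. Everything below is a hedged toy version (parameters and attack strategy are illustrative, not the paper's exact protocols):

```python
import random, math

def perturb(bit, eps, rng):
    """Randomized response: report truthfully with probability e^eps/(1+e^eps)."""
    p = math.exp(eps) / (1 + math.exp(eps))
    return bit if rng.random() < p else 1 - bit

def estimate(reports, eps):
    """Unbiased frequency estimate, inverting the known perturbation."""
    p = math.exp(eps) / (1 + math.exp(eps))
    n = len(reports)
    return (sum(reports) / n - (1 - p)) / (2 * p - 1)

rng = random.Random(1)
eps = 1.0
honest = [perturb(0, eps, rng) for _ in range(10000)]  # true frequency is 0
print(round(estimate(honest, eps), 2))                 # close to 0.0

# Data poisoning: fake users skip the perturbation entirely and always
# report 1, inflating the estimated frequency of the target item.
fake = [1] * 1000
print(estimate(honest + fake, eps) > 0.1)              # True: estimate is inflated
```

Because the aggregator cannot distinguish honest noisy reports from crafted ones, a small fraction of fake users shifts the debiased estimate far from the true frequency, which is the core of the paper's attacks.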
Secure computation is a promising privacy enhancing technology, but it is often not scalable enough for data intensive applications.
On the other hand, the use of sketches has gained popularity in data mining, because sketches often give rise to highly efficient and scalable sub-linear algorithms. It is natural to ask: what if we put secure computation and sketches together? We investigated this question, and the findings are interesting: we can get security, we can get scalability, and, somewhat unexpectedly, we can also get differential privacy for free. Our study started from building a secure computation protocol based on Flajolet-Martin (FM) sketches for solving the Private Distributed Cardinality Estimation (PDCE) problem, a fundamental problem with applications ranging from crowd tracking to network monitoring.
The state-of-the-art protocol (CCS '17) is computationally expensive and not scalable enough to cope with big-data applications, which prompted us to design a better protocol. The result signifies a new approach to achieving differential privacy that departs from the mainstream approach of explicitly injecting calibrated noise. Free differential privacy can be achieved for two reasons: secure computation minimizes information leakage, and the intrinsic estimation variance of the FM sketch makes the output of our protocol uncertain. We further show that the result is not just theoretical: the minimal cardinality for differential privacy to hold is only 10^2 to 10^4 for typical parameters.
Differentially private analysis of graphs is widely used for releasing statistics from sensitive graphs while still preserving user privacy. Most existing algorithms however are in a centralized privacy model, where a trusted data curator holds the entire graph.
As this model raises a number of privacy and security issues (such as the trustworthiness of the curator and the possibility of data breaches), it is desirable to consider algorithms in a more decentralized local model, where no server holds the entire graph.
In this work, we consider the local model and present algorithms for counting subgraphs, a fundamental task for analyzing the connection patterns in a graph, under Local Differential Privacy (LDP). For triangle counts, we present algorithms that use one and two rounds of interaction, and show that the additional round can significantly improve utility. For k-star counts, we present an algorithm that achieves an order-optimal estimation error in the non-interactive local model.
We provide new lower-bounds on the estimation error for general graph statistics including triangle counts and k-star counts. Finally, we perform extensive experiments on two real datasets, and show that it is indeed possible to accurately estimate subgraph counts in the local differential privacy model.
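A simple non-interactive baseline for k-star counting conveys the idea: each user locally counts the k-stars centered at itself, C(d, k), and adds Laplace noise before reporting. The publicly known degree cap and the resulting sensitivity bound below are simplifying assumptions for illustration, not the paper's algorithm:

```python
import math, random

def laplace(scale, rng):
    """Sample Laplace(0, scale) via inverse-CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def local_kstar_report(degree, k, eps, d_max, rng):
    """Each user reports C(degree, k) + Laplace noise. The sensitivity bound
    C(d_max - 1, k - 1) assumes a publicly known maximum degree d_max
    (a common simplifying assumption in this setting)."""
    count = math.comb(degree, k)
    sensitivity = math.comb(d_max - 1, k - 1)
    return count + laplace(sensitivity / eps, rng)

rng = random.Random(7)
k, eps, d_max = 2, 1.0, 20
degrees = [rng.randint(1, d_max) for _ in range(5000)]   # toy degree sequence
true_total = sum(math.comb(d, k) for d in degrees)
noisy_total = sum(local_kstar_report(d, k, eps, d_max, rng) for d in degrees)
print(abs(noisy_total - true_total) / true_total < 0.05)  # relative error stays small
```

Summing the noisy per-user reports gives an unbiased total whose noise grows only with the square root of the number of users, which is why aggregate subgraph statistics remain accurate even under local privacy.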
Kornegay, Morgan State University. While Rowhammer-induced bit flips are exploitable from native code, triggering them in the browser from JavaScript faces three nontrivial challenges.
First, given the lack of cache flushing instructions in JavaScript, existing eviction-based Rowhammer attacks are already slow for the older single- or double-sided variants and thus not always effective.
With many-sided Rowhammer, mounting effective attacks is even more challenging, as it requires evicting many different aggressor addresses from the CPU caches. Second, the most effective many-sided variants, known as n-sided, require large physically contiguous memory regions, which are not available in JavaScript. To mount effective attacks, SMASH exploits high-level knowledge of cache replacement policies to generate optimal access patterns for eviction-based many-sided Rowhammer.
To lift the requirement for large physically contiguous memory regions, SMASH decomposes n-sided Rowhammer into multiple double-sided pairs, which we can identify using slice coloring. We demonstrate the feasibility of database reconstruction under a cache side-channel attack on SQLite. We then present several algorithms that, taken together, reconstruct nearly the exact database in varied experimental conditions, given the approximate query volumes recovered from the side channel.
The time complexity of our attacks grows quickly with the size of the range of the queried attribute but scales well to large databases. Experimental results show that we can reconstruct large databases over attribute ranges of size 12 with an error percentage below 1%.
Temporal memory corruptions are commonly exploited software vulnerabilities that can lead to powerful attacks. Despite significant progress made by decades of research on mitigation techniques, existing countermeasures fall short due to either limited coverage or overly high overhead. Furthermore, they require external mechanisms (e.g., spatial memory safety) to protect their metadata; otherwise, their protection can be bypassed or disabled. To address these limitations, we present PTAuth, a novel runtime scheme based on robust points-to authentication for detecting all kinds of temporal memory corruptions.
PTAuth contains a customized compiler for code analysis and instrumentation and a runtime library for performing the points-to authentication as a protected program runs. PTAuth uses minimal in-memory metadata and protects its metadata without requiring spatial memory safety.
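The points-to authentication idea, pairing each pointer with the identity of the object it was derived from and checking that pairing on every access, can be simulated in a few lines. This is a conceptual sketch in a high-level language, not PTAuth's actual compiler- and runtime-based scheme:

```python
class Heap:
    """Toy allocator: every allocation gets a fresh ID, and every 'pointer'
    carries the ID of the object it points to (cf. points-to authentication)."""
    def __init__(self):
        self.next_id = 1
        self.live = {}                       # object id -> data

    def alloc(self, data):
        obj_id = self.next_id                # IDs are never reused
        self.next_id += 1
        self.live[obj_id] = data
        return obj_id                        # the "authenticated pointer"

    def free(self, ptr):
        del self.live[ptr]                   # metadata for ptr is invalidated

    def deref(self, ptr):
        if ptr not in self.live:             # authentication fails when dangling
            raise RuntimeError("temporal violation: use-after-free detected")
        return self.live[ptr]

h = Heap()
p = h.alloc("secret")
print(h.deref(p))                            # prints "secret"
h.free(p)
q = h.alloc("new object")                    # fresh ID: no accidental aliasing
try:
    h.deref(p)                               # the dangling pointer p is caught
except RuntimeError as e:
    print(e)
```

Because freed IDs are never recycled, a dangling reference can never authenticate against a newer object occupying the same storage, which is the property a use-after-free exploit relies on breaking.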
However, it remains unclear whether these techniques are susceptible to structural attacks. This paper exploits the properties of integrated circuit (IC) design tools, also termed electronic design automation (EDA) tools, to undermine the security of CAC techniques.
Our attack can break circuits processed with any EDA tool, which is alarming because, until now, no EDA tool has been able to render a secure locking solution: logic locking cannot rely on existing EDA tools. We also provide a security property to ensure resilience against structural attacks. Commonly used circuits can satisfy this property, but only in a few cases where they cannot even defeat brute force; this raises questions about the use of these circuits as benchmarks for evaluating logic locking and other security techniques.
Security architectures providing Trusted Execution Environments (TEEs) have been an appealing research subject for a wide range of computer systems, from low-end embedded devices to powerful cloud servers.
The goal of these architectures is to protect sensitive services in isolated execution contexts, called enclaves. Unfortunately, existing TEE solutions suffer from significant design shortcomings. First, they follow a one-size-fits-all approach, offering only a single enclave type; however, different services need flexible enclaves that can adjust to their demands.
Second, they cannot efficiently support emerging applications. Third, their protection against cache side-channel attacks is either an afterthought or impractical. In this work, we propose CURE, the first security architecture which tackles these design challenges by providing different types of enclaves: (i) sub-space enclaves provide vertical isolation at all execution privilege levels, (ii) user-space enclaves provide isolated execution to unprivileged applications, and (iii) self-contained enclaves allow isolated execution environments that span multiple privilege levels.
Moreover, CURE enables the exclusive assignment of system resources, e.g., peripherals, CPU cores, or cache resources, to single enclaves. CURE requires minimal hardware changes while significantly improving the state of the art of hardware-assisted security architectures, imposing only a modest geometric-mean performance overhead. Thakur, University of California, Davis. Measured boot is an important class of boot protocols that ensure that each layer of firmware and software in a device's chain of trust is measured, and that the measurements are reliably recorded for subsequent verification.
Our evaluation shows that using a fully verified implementation has minimal to no effect on code size and boot time compared to an existing unverified implementation. When adversaries are powerful enough to coerce users into revealing encryption keys, encryption alone becomes insufficient for data protection. Plausible deniability (PD) mechanisms resolve this by enabling users to hide the mere existence of sensitive data, often by providing plausible "cover texts" or "public data volumes" hosted on the same device.
Unfortunately, with the increasing prevalence of NAND flash as a high-performance, cost-effective storage medium, PD becomes even more challenging against realistic adversaries who can usually access a device at multiple points in time ("multi-snapshot" adversaries). The problem is further compounded by the fact that flash devices' internal data management is mostly proprietary. For example, in a majority of commercially available flash devices, a delete or overwrite operation issued from the upper layers almost certainly will not result in an actual immediate erase of the underlying flash cells.
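For background on the coding primitive, the classic Rivest-Shamir write-once memory (WOM) code stores two successive 2-bit values in 3 cells that may only transition 0 -> 1, the same constraint NAND cells obey between erases:

```python
# Rivest-Shamir <2,3> WOM code: two generations of 2 bits in 3 write-once cells.
FIRST  = {"00": "000", "10": "100", "01": "010", "11": "001"}
SECOND = {"00": "111", "10": "011", "01": "101", "11": "110"}

def decode(cells):
    """At most one set cell means generation one; otherwise generation two."""
    table = FIRST if cells.count("1") <= 1 else SECOND
    return next(v for v, c in table.items() if c == cells)

def write_first(value):
    return FIRST[value]

def write_second(cells, value):
    """Rewrite the stored value using only 0 -> 1 cell transitions (no erase)."""
    if decode(cells) == value:
        return cells                          # unchanged value needs no writes
    new = SECOND[value]
    assert all(not (o == "1" and n == "0") for o, n in zip(cells, new)), \
        "would require a 1 -> 0 transition (i.e., an erase)"
    return new

state = write_first("10")
print(decode(state))                  # 10
state = write_second(state, "01")     # rewrite without erasing
print(state, decode(state))           # 101 01
```

The code tables are arranged so that every second-generation codeword is reachable from every first-generation codeword by only setting bits, which is what lets additional (here: updated, in the paper: hidden) information be written into already-programmed flash pages.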
To address these challenges, we designed a new class of write-once memory (WOM) codes to store hidden bits in the same physical locations as other public bits. This is made possible by the inherent nature of NAND flash and the possibility of issuing multiple writes to target cells that have not previously been written in existing pages. Kimberly J. The threat of cyber attacks is a growing concern across the world, leading to an increasing need for sophisticated cyber defense techniques.
Attackers often rely on direct observation of cyber environments. This reliance provides opportunities for defenders to affect attacker perception and behavior by plying the powerful tools of defensive cyber deception.
In this paper, we analyze data from a controlled experiment designed to understand how defensive deception, both cyber and psychological, affects attackers [16]. Professional red teamers participated in a network penetration test in which both the presence and the explicit mention of deceptive defensive techniques were controlled.
While a detailed description of the experimental design and execution, along with preliminary results related to red teamer characteristics, has been published, that work did not address any of the main hypotheses.
Granted access to the cyber and self-report data collected from the experiment, this publication begins to address these hypotheses by investigating the effectiveness of decoy systems for cyber defense through comparison of various measures of participant forward progress across the four experimental conditions. The results presented in this paper support a new finding: combining the presence of decoys with explicit information that deception is in use has the greatest impact on cyber attack behavior, compared to a control condition in which no deception was used.
With the ubiquity of data breaches, forgotten-about files stored in the cloud create latent privacy risks. We take a holistic approach to help users identify sensitive, unwanted files in cloud storage. We first conducted 17 qualitative interviews to characterize factors that make humans perceive a file as sensitive, useful, and worthy of either protection or deletion.
Building on our findings, we conducted a primarily quantitative online study. We showed long-term users of Google Drive or Dropbox a selection of files from their accounts. They labeled and explained these files' sensitivity, usefulness, and desired management (whether they wanted to keep, delete, or protect them). For each file, we collected many metadata and content features, building a training dataset of more than 3,000 labeled files.
We then built Aletheia, which predicts a file's perceived sensitivity and usefulness, as well as its desired management. Aletheia's performance validates a human-centric approach to feature selection when using inference techniques on subjective security-related tasks. It also improves upon the state of the art in minimizing the attack surface of cloud accounts. Disinformation is proliferating on the internet, and platforms are responding by attaching warnings to content. There is little evidence, however, that these warnings help users identify or avoid disinformation.
In this work, we adapt methods and results from the information security warning literature to design and evaluate effective disinformation warnings. In an initial laboratory study, we used a simulated search task to examine contextual and interstitial disinformation warning designs. We found that users routinely ignore contextual warnings but notice interstitial warnings, responding by seeking information from alternative sources.
We then conducted a follow-on crowdworker study with eight interstitial warning designs. We confirmed a significant impact on user information-seeking behavior, and we found that a warning's design could effectively inform users or convey a risk of harm. We also found, however, that neither user comprehension nor fear of harm moderated behavioral effects.
Our work provides evidence that disinformation warnings can, when designed well, help users identify and avoid disinformation. We show a path forward for designing effective warnings, and we contribute repeatable methods for evaluating behavioral effects. We also surface a possible dilemma: disinformation warnings might be able to inform users and guide behavior, but the behavioral effects might result from user experience friction, not informed decision making.
People who are involved with political campaigns face increased digital security threats from well-funded, sophisticated attackers, especially nation-states. Improving political campaign security is a vital part of protecting democracy.
To identify campaign security issues, we conducted qualitative research with 28 participants across the U.S. A main, overarching finding is that a unique combination of threats, constraints, and work culture leads people involved with political campaigns to use technologies from across platforms and domains in ways that leave them, and democracy, vulnerable to security attacks.
Sensitive data was kept in a plethora of personal and work accounts, with ad hoc adoption of strong passwords, two-factor authentication, encryption, and access controls.
No individual company, committee, organization, campaign, or academic institution can solve the identified problems on its own. To this end, we provide an initial understanding of this complex problem space and recommendations for how a diverse group of experts can begin working together to improve security for political campaigns. Small businesses (SBs) are often ill-informed and under-resourced against increasing online threats.
We found that CISOs confirmed common observations that SBs are generally unprepared for online threats and uninformed about issues such as insurance and regulation. We also found that, despite perceived usability problems with language and formatting, the effectiveness of government-authored guidance (a key reference source for CISOs and SBs) was deemed on par with commercial resources. These observations yield recommendations for better formatting, prioritizing, and timing of security guidance for SBs, such as better-tailored checklists, investment suggestions, and scenario-based exercises.
Mazurek, University of Maryland. People are frequently required to send documents, forms, or other materials containing sensitive data. The specific transmission mechanisms end up relying on the knowledge and preferences of the parties involved. We find that users are more likely to recognize risks to data at rest after receipt, but not at the sender, namely themselves. When not using an online portal provided by the recipient, participants primarily envision transmitting sensitive documents in person or via email, and have little experience using secure, privacy-preserving alternatives.
Despite recognizing general risks, participants report high satisfaction with the privacy and convenience of the situations they have actually experienced.
These results suggest opportunities to design new solutions that promote securely sending sensitive materials, perhaps as new utilities within standard email workflows. Cybercrime is on the rise: attacks by hackers, organized crime, and nation-state adversaries are an economic threat for companies worldwide.
Small and medium-sized enterprises (SMEs) have increasingly become victims of cyberattacks in recent years. SMEs often lack the awareness and resources to deploy extensive information security measures. Many guidelines and recommendations encourage companies to invest more in their information security measures. However, there is a lack of understanding of the adoption of security measures in SMEs, their risk perception with regard to cybercrime, and their experiences with cyberattacks.
We report on SMEs' experiences with cybercrime, their management of information security, and their risk perception. We present and discuss empirical results on the adoption of both technical and organizational security measures and on risk awareness in SMEs.
We find that many technical security measures and basic awareness practices have been deployed in the majority of companies. We uncover differences in the reporting of cybercrime incidents by SMEs based on their industry sector, company size, and security awareness. We conclude with a discussion of recommendations for future research, industry, and policy makers.
Safeguarding blockchain peer-to-peer (P2P) networks is more critical than ever in light of recent network attacks. Bitcoin has been successfully handling traditional Sybil and eclipse attacks; however, a recent Erebus attack [Tran et al.] has shown that these defenses are insufficient.
Our large-scale evaluations of these quick patches and three similar, carefully designed protocol tweaks confirm that, unfortunately, no simple solution can effectively handle the attack. This paper focuses on a more fundamental solution called routing-aware peering (RAP), a proven silver bullet for detecting and circumventing similar network adversaries in other P2P networks.
However, we show that, contrary to expectation, preventing Erebus attacks with RAP is only wishful thinking. We discover that Erebus adversaries can exploit a tiny portion of route-inference errors in any RAP implementation, which gives an asymmetric advantage to the network adversaries and renders all RAP approaches ineffective.
To that end, we propose an integrated defense framework that composes the available simple protocol tweaks and RAP implementations. In particular, we show that a highly customizable defense profile is required for individual Bitcoin nodes because RAP's efficacy depends significantly on where a Bitcoin node is located in the Internet topology. We present an algorithm that outputs a custom optimal defense profile that prevents most Erebus attacks launched from the large top-tier transit networks.
Meanwhile, a number of vulnerabilities and high-profile attacks against top EOSIO DApps and their smart contracts have been discovered and observed in the wild, resulting in serious financial damage.
Most EOSIO smart contracts are not open-sourced and are typically compiled to WebAssembly (Wasm) bytecode, making it challenging to analyze them and detect possible vulnerabilities. Our framework includes a practical symbolic execution engine for Wasm, a customized library emulator for EOSIO smart contracts, and four heuristic-driven detectors to identify the presence of the four most popular vulnerabilities in EOSIO smart contracts.
We further analyze possible exploitation attempts on these vulnerable smart contracts and identify 48 in-the-wild attacks (27 of them confirmed by DApp developers), which have resulted in serious financial losses.
Recent attacks exploiting errors in smart contract code have had devastating consequences, calling into question the benefits of this technology. It is currently highly challenging to fix errors and deploy a patched contract in time.