Everything posted by bonummaster
-
The Tale of Google's Response to Reptar CPU Vulnerability
YOUSIF HUSSIN, CYBERSECURITY EXPERT
Just as Vulnerability Research is an important area of focus at Google, so is Vulnerability Response to critical and complex vulnerabilities. Vulnerability Response at Google not only helps secure Google's products and users, but in certain cases, it affects millions of devices across the Internet. In this post, we'll share the story of one of those cases: Google's response to Reptar.
Let's start with some context about the role of Google's Vulnerability Coordination Center (VulnCC) in a response like this one. Composed of a core team of security experts, VulnCC ensures Google has a consistent, well-functioning process for vulnerability management across Alphabet. When a critical and complex security vulnerability affecting Alphabet is identified by a Google researcher, by an external researcher through Google's Vulnerability Rewards Program (VRP), or through VulnCC's monitoring of vulnerability intelligence sources, a VulnCC team member leads the mitigation effort for that vulnerability across Alphabet.
What's Reptar?
Reptar is a CPU vulnerability (CVE-2023-23583) with a CVSS [1] Base Score of 8.8 (High) that affects certain Intel CPU models. It's an architectural vulnerability in which a particular sequence of x86 instructions corrupts the instruction pointer, causing unexpected behavior and bypassing CPU security boundaries. The vulnerability was discovered in August 2023, when a validation pipeline used by the researchers reported unexpected results under certain conditions. During testing, triggering this bug on a CPU with multiple cores caused an MCE (Machine Check Exception) [2]. Moreover, this worked from an unprivileged guest virtual machine. Exploitation of this vulnerability can crash the machine and can also lead to privilege escalation and information disclosure. The vulnerability was discovered by security researchers at Google. See the dedicated blog post by Tavis Ormandy for more details about the vulnerability. For additional information, see Intel's Guidance.
How Did Google Initiate the Response to Reptar?
Upon the discovery of the Reptar vulnerability and its escalation to VulnCC, I (Yousif Hussin) took on the role of leading a coordinated response for it. As a first step, the vulnerability was reported to Intel, in accordance with our reporting policy, with a disclosure deadline of 90 days. From there, Google partnered with Intel to securely share the vulnerability mitigation information with other large industry players, ensuring they too could respond and protect all users globally (not only Google users). During coordination, we used the Traffic Light Protocol (TLP) [3]. The response was labeled TLP:RED, meaning "Not for disclosure, restricted to participants only". Access to Reptar information was tightly controlled to ensure vulnerability details wouldn't be leaked; a leak could have been used by attackers against our users or others globally. At the outset of the response, we conducted a rapid Google-wide impact assessment, developed a response plan, and assembled a response team with roles clearly assigned. We actively shared status updates with Google's executives throughout the vulnerability response effort.
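As a purely illustrative aside (this is not Google's internal tooling), an impact assessment like the one mentioned above often begins with something as simple as enumerating CPU models and microcode revisions across a fleet. A minimal Linux sketch, where FIXED_MICROCODE is a placeholder value you would take from Intel's guidance for your specific CPU model:

```python
# Illustrative sketch (not Google's tooling): report CPU model and microcode
# revision on a Linux host as a starting point for an impact assessment.
# FIXED_MICROCODE is a placeholder; the real value comes from Intel's
# guidance for CVE-2023-23583 for your CPU model.
FIXED_MICROCODE = 0xDEADBEEF  # placeholder revision, not a real value

def cpuinfo_fields(path: str = "/proc/cpuinfo") -> dict:
    fields = {}
    with open(path) as f:
        for line in f:
            if ":" in line:
                key, _, value = line.partition(":")
                fields.setdefault(key.strip(), value.strip())  # first CPU only
    return fields

info = cpuinfo_fields()
model = f"{info.get('vendor_id')} family {info.get('cpu family')} model {info.get('model')}"
microcode = int(info.get("microcode", "0x0"), 16)
status = "patched" if microcode >= FIXED_MICROCODE else "needs review"
print(f"{model}: microcode {microcode:#x} -> {status}")
```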
A key objective of the response was to create and execute a mitigation strategy and ensure its timely deployment across critical areas of Google before the end of the embargo period, all while minimizing the chances of prematurely leaking any information about the vulnerability. Now let's take a closer look at the different phases of the response.
Reptar Vulnerability Response Milestones Timeline
The timeline below highlights key vulnerability response milestones in relation to the embargo period.
Fig. 1. Reptar Vulnerability Response Timeline (2023)
The Google-Wide Impact Assessment for Reptar
To perform an effective impact assessment for Reptar, it was essential to have a thorough understanding of the vulnerability, how it's exploited, its attack surface, and how Google's products and systems operate. In the case of Reptar, we assessed the impact across Google to identify affected systems. The key affected areas were:
Of the affected areas above, Reptar posed the highest risk to Google Cloud. This is due to a possible attack strategy in which an external attacker bypasses the CPU security controls in a multi-tenant Google Compute Engine environment. This could impact the host machine and subsequently other services or virtual machines running on the same host. In turn, it could cause a denial of service on victims' (other customers') virtual machines and services by triggering an MCE crash on the host machine (see graphic below for an example). The vulnerability could also lead to privilege escalation and information disclosure.
Fig. 2. Cloud Reptar Exploit (VM DoS)
The Reptar Vulnerability Response Team
When the impact assessment identified the affected products and systems, the scale of the response became clear and area-specific response leads joined the response team. When issues such as this one require extensive coordination, it can help significantly to use an Incident Management structure to handle them, even if they have not caused an incident. At Google, we use the Incident Management at Google (IMAG) framework, which was used in Reptar's response. IMAG is based on the Incident Command System (ICS) used by firefighters and medics, and it teaches how to use an Incident Commander (IC) to organize an emergency response by establishing a hierarchical structure with clear roles, tasks, and communication responsibilities. In the case of Reptar, I took on the role of the IC, structured the team as shown below, and led it throughout the response. With respect to team communications, the response team created chat channels: a general one for the entire response team and one for each function (operations, GCE, communications, etc.), each with the relevant individuals.
Fig. 3. Reptar Vulnerability Response Team
Reptar Mitigation Strategy for Key Products
For Google Cloud, two primary mitigation solutions were proposed by the response team:
We internally identified and tested a chicken-bit mitigation. A chicken-bit is jargon for a chip hardware configuration setting that can be used to disable a certain feature on the chip; for Reptar, the relevant chicken-bit can be set in an MSR [7] CPU register. The vulnerability is exploitable on CPUs with the Fast-Strings feature enabled, so disabling this feature via the MSR mitigates the risk. However, the chicken-bit mitigation caused a significant performance impact, so it was discarded as a viable option.
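For illustration only, here is a minimal sketch (not Google's tooling) of how one could inspect that chicken-bit on a Linux host. It assumes root privileges, that the msr kernel module is loaded, and that Fast-Strings is controlled by bit 0 of the IA32_MISC_ENABLE MSR (0x1A0), as documented in Intel's SDM:

```python
# Illustrative sketch: read the Fast-Strings enable bit on every core.
# Assumes the 'msr' kernel module is loaded and root privileges;
# IA32_MISC_ENABLE bit 0 is "Fast-Strings Enable" per Intel's SDM.
import glob
import struct

IA32_MISC_ENABLE = 0x1A0
FAST_STRINGS_BIT = 1 << 0

def read_msr(cpu_dev: str, reg: int) -> int:
    with open(cpu_dev, "rb") as f:
        f.seek(reg)                    # the MSR address is the file offset
        return struct.unpack("<Q", f.read(8))[0]

for dev in sorted(glob.glob("/dev/cpu/*/msr")):
    value = read_msr(dev, IA32_MISC_ENABLE)
    state = "enabled" if value & FAST_STRINGS_BIT else "disabled"
    print(f"{dev}: fast-strings {state}")
```

Clearing that bit on every core is essentially what a chicken-bit mitigation amounts to, which also hints at the performance cost the team observed: string operations fall back to a slower path.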
After Intel provided a candidate for a long-term microcode solution to Google and other participating companies, it was extensively tested in their environments and proven to remediate the risk without causing an unacceptable impact. With this verification, Intel promoted the microcode to Production-Validated (PV) status. In general, the microcode update is the superior solution compared to the chicken-bit: it is the official update supported by the vendor, and the fix has undergone extensive testing not only by Google and Intel, but also by other major industry partners. We also considered and compared the various deployment approaches for the long-term microcode fix. The options were:
Through thorough testing and comparison of the approaches, the team concluded that non-disruptively hotloading the microcode was the best course of action. A microcode rollout plan was developed, and a suite of monitoring tools was configured to ensure systems were carefully watched for potential anomalies throughout the rollout. In the case of ChromeOS, the Intel-provided microcode fix was tested and incorporated into a forthcoming ChromeOS update scheduled to be made available to ChromeOS customers prior to the vulnerability disclosure date.
Execution of the Reptar Mitigation
Each area lead customized the microcode rollout for their respective environment. Subsequently, the non-disruptive hotload server microcode rollout was executed Google-wide across the entire fleet (Google Cloud + Borg infrastructure). The mitigation was deployed successfully and transparently, with no impact on users or customers. As for the ChromeOS mitigation, the fix was published for client devices as planned, before the disclosure date. In light of the smooth rollout, free of unexpected issues, and the speed at which it was completed, Reptar's mitigation experience was further evidence that Google engineers maintain an infrastructure that can deploy mitigations quickly and reliably at Google's scale.
Reptar Exploitation Detection
When leading a vulnerability response, we assess whether attempted exploitation can be detected and whether there's evidence of exploitation attempts at Google. Typically, CPU vulnerabilities are harder for traditional host monitoring tools to detect than other types of vulnerabilities, but there are usually still some opportunities to provide visibility. To identify detection opportunities, the response team works with partner security teams to develop signals that detect exploitation of the vulnerability. In the case of Reptar, researchers have so far only been able to demonstrate a DoS attack in practice, which is readily detectable by standard monitoring tools.
Response Communications
Communication is a critical component of the response. For example, once a vulnerability is disclosed, Google customers may have inquiries regarding the vulnerability, its mitigation, and potential impact. In such a case, answers to anticipated questions should be documented in a Frequently Asked Questions (FAQ) document and made available to customer-facing engineers to help respond to customer inquiries. For Reptar, while some of the technical teams were engaged in mitigation activities, the communication leads ensured communication artifacts were developed. These artifacts included the Security Bulletin and an FAQ document.
In addition to communication artifacts, the communication leads established channels to facilitate any escalations resulting from the vulnerability response. This included external escalations from customers and internal escalations from Googlers.
Finally, Reptar Disclosure Day
Upon Intel's disclosure of the vulnerability, Google published a Security Bulletin. We ensured that monitoring for escalations was in place and that the team was prepared to respond as necessary; fortunately, we never needed to use those escalation channels. This marked the successful completion of the Reptar vulnerability response. Vulnerability Response is a critical and rapidly developing field, which is why Google has been investing in it as a discipline in its own right. While the discovery of and response to Reptar demonstrate Google's ability to protect not only its own users but computer users around the world from critical security threats, every new vulnerability provides an opportunity to further refine the response process. Google's response to Reptar showed that a well-orchestrated response to a critical vulnerability like this one, including successful internal and external collaboration, is vital to protecting the Internet as a whole.
References
[1] The Common Vulnerability Scoring System (CVSS) provides a way to capture the principal characteristics of a vulnerability and produce a numerical score reflecting its severity.
[2] Machine Check Exception (MCE) is a type of hardware error that occurs when a CPU detects a hardware problem.
[3] Traffic Light Protocol (TLP) is a set of designations used to ensure that sensitive information is shared only with the appropriate audience.
[4] Google Compute Engine (GCE) is a secure and customizable compute service that lets you create and run virtual machines on Google's infrastructure.
[5] Borg is Google's cluster management system, designed to manage jobs and machine resources on a massive scale.
[6] ChromeOS is the speedy, simple and secure operating system that powers every Chromebook.
[7] Model-Specific Register (MSR) is any of various control registers in the x86 system architecture used for toggling certain CPU features.
-
devos50/qemu-ios: A QEMU emulator for legacy Apple devices https://github.com/devos50/qemu-ios
QEMU-iOS
QEMU-iOS is an emulator for legacy Apple devices. Currently, the iPod Touch 1G and iPod Touch 2G are supported.
Running the iPod Touch 1G
Instructions on how to run the iPod Touch 1G emulator can be found here. A technical blog post with more information about the peripherals and reverse engineering process is published here.
Running the iPod Touch 2G
Instructions on how to run the iPod Touch 2G emulator can be found here.
-
Trains were designed to break down after third-party repairs, hackers find BY ASHLEY BELANGER - CST Dragon Sector uploaded a video to social media after discovering an "undocumented ‘unlock code’ which you could enter from the train driver’s panel" fixed "mysterious issues" impacting trains in Poland. An unusual right-to-repair drama is disrupting railroad travel in Poland despite efforts by hackers who helped repair trains that allegedly were designed to stop functioning when serviced by anyone but Newag, the train manufacturer. Members of an ethical hacking group called Dragon Sector, including Sergiusz Bazański and Michał Kowalczyk, were called upon by a train repair shop, Serwis Pojazdów Szynowych (SPS), to analyze train software in June 2022. SPS was desperate to figure out what was causing "mysterious failures" that shut down several vehicles owned by Polish train operator the Lower Silesian Railway, Polish infrastructure trade publication Rynek Kolejowy reported. At that point, the shortage of trains had already become "a serious problem" for carriers and passengers, as fewer available cars meant shorter trains and reduced rider capacity, Rynek Kolejowy reported. Dragon Sector spent two months analyzing the software, finding that "the manufacturer's interference" led to "forced failures and to the fact that the trains did not start," and concluding that bricking the trains "was a deliberate action on Newag's part." According to Dragon Sector, Newag entered code into the control systems of Impuls trains to stop them from operating if a GPS tracker indicated that the train was parked for several days at an independent repair shop. The trains "were given the logic that they would not move if they were parked in a specific location in Poland, and these locations were the service hall of SPS and the halls of other similar companies in the industry," Dragon Sector's team alleged. "Even one of the SPS halls, which was still under construction, was included." The code also allegedly bricked the train if "certain components had been replaced without a manufacturer-approved serial number," 404 Media reported. In a statement, Newag denied developing any so-called "workshop-detection" software that caused "intentional failures" and threatened to sue Dragon Sector for slander and for violating hacking laws. “Hacking IT systems is a violation of many legal provisions and a threat to railway traffic safety,” Newag said, insisting that the hacked trains be removed from use because they now pose alleged safety risks. Newag's safety claims are still unsubstantiated, 404 Media reported. "We categorically deny and negate Newag's uploading of any functionality in vehicle control systems that limits or prevents the proper operation of vehicles, as well as limiting the group of entities that can provide maintenance or repair services," Newag's statement said. According to Newag, Dragon Sector's report shouldn't be trusted because it was commissioned by one of Newag's biggest competitors. Dragon Sector maintains that the evidence supports its conclusions. Bazański posted on Mastodon that “these trains were locking up for arbitrary reasons after being serviced at third-party workshops. The manufacturer argued that this was because of malpractice by these workshops, and that they should be serviced by them instead of third parties." In some cases, Bazański wrote, Newag "appeared to be able to lock the train remotely.” Newag has said that "any remote intervention" is "virtually impossible." 
Lawsuit threats fail to silence hackers
Dragon Sector got the trains running after discovering "an undocumented 'unlock code' which you could enter from the train driver's panel which magically fixed the issue," Dragon Sector's team told 404 Media. Newag has maintained that it has never and will never "introduce into the software of our trains any solutions that lead to intentional failures." "We do not know who interfered with the train control software, using what methods and what qualifications," Newag said. "We also notified the Office of Rail Transport about this so that it could decide to withdraw from service the sets subjected to the activities of unknown hackers." Dragon Sector and SPS have denied interfering with the trains' control systems. While Newag has contacted authorities to investigate the hacking, Janusz Cieszyński, Poland's former minister of digital affairs, posted on X that the evidence appears to weigh against Newag. "The president of Newag contacted me," Cieszyński wrote. "He claims that Newag fell victim to cybercriminals and it was not an intentional action by the company. The analysis I saw indicated something else, but for the sake of clarity, I will write about everything." Newag president Zbigniew Konieczek said that "no evidence was provided that our company intentionally installed the faulty software. In our opinion, the truth may be completely different—that, for example, the competition interfered with the software." Konieczek also accused Cieszyński of disseminating "false and highly harmful information about Newag." 404 Media noted that Newag appeared to be following a common playbook in the right-to-repair world, where manufacturers intimidate competitor repair shops with threatened lawsuits and unsubstantiated claims about the safety risks of third-party repairs. So far, Dragon Sector does not appear intimidated, posting its success on YouTube and discussing its findings at Poland's Oh My H@ck conference in Warsaw. The group is also planning "a more detailed presentation" for the 37th Chaos Communication Congress in Hamburg, Germany, at the end of December, The Register reported. Because of the evidence gathered during their analysis, the Dragon Sector team has doubts about whether Newag will actually follow through with the lawsuit. "Their defense line is really poor, and they would have no chance defending it," Kowalczyk told 404 Media. "They probably just want to sound scary in the media."
-
Oldest fortresses in the world discovered
Top: aerial view of the Amnya river and promontory; bottom: general plan of Amnya I and II, showing location of excavation trenches and features visible in the surface relief. Credit: Illustration by N. Golovanov, S. Krubeck and S. Juncker/Antiquity (2023). DOI: 10.15184/aqy.2023.164
In a groundbreaking archaeological discovery, an international team led by archaeologists from Freie Universität Berlin has uncovered fortified prehistoric settlements in a remote region of Siberia. The results of their research reveal that hunter–gatherers in Siberia constructed complex defense structures around their settlements 8,000 years ago. This finding reshapes our understanding of early human societies, challenging the idea that people began building permanent settlements with monumental architecture and developing complex social structures only with the advent of agriculture. The study, "The World's Oldest-Known Promontory Fort: Amnya and the Acceleration of Hunter-Gatherer Diversity in Siberia 8000 Years Ago," was published in the journal Antiquity at the beginning of December. The investigation centered on the fortified settlement of Amnya, acknowledged as the northernmost Stone Age fort in Eurasia, where the team of researchers conducted fieldwork in 2019. The group was led by Professor Henny Piezonka, archaeologist at Freie Universität Berlin, and Dr. Natalia Chairkina, archaeologist in Yekaterinburg, Russia. Among the team's members were German and Russian researchers from Berlin, Kiel, and Yekaterinburg. Tanja Schreiber, archaeologist at the Institute of Prehistoric Archaeology in Berlin and co-author of the study, explains, "Through detailed archaeological examinations at Amnya, we collected samples for radiocarbon dating, confirming the prehistoric age of the site and establishing it as the world's oldest-known fort. Our new palaeobotanical and stratigraphical examinations reveal that inhabitants of Western Siberia led a sophisticated lifestyle based on the abundant resources of the taiga environment." The prehistoric inhabitants caught fish from the Amnya River and hunted elk and reindeer using bone- and stone-tipped spears. To preserve their surplus of fish oil and meat, they crafted elaborately decorated pottery. Approximately ten Stone Age fortified sites are known to date, comprising pit houses enclosed by earthen walls and wooden palisades, which suggests advanced architectural and defensive capabilities. This discovery challenges the traditional view that permanent settlements, accompanied by defensive structures, emerged only with farming societies, disproving the notion that agriculture and animal husbandry were prerequisites for societal complexity. The Siberian findings, along with other global examples like Gobekli Tepe in Anatolia, contribute to a broader reassessment of evolutionist notions that suggest a linear development of societies from simple to complex. In various parts of the world, from the Korean peninsula to Scandinavia, hunter-gatherer communities developed large settlements by drawing on aquatic resources. The abundance of natural resources in the Siberian taiga, such as annual fish runs and migrating herds, probably played a crucial role in the emergence of the hunter–gatherer forts. The fortified settlements overlooking rivers may have served as strategic locations to control and exploit productive fishing spots.
The competitive nature arising from the storage of resources and increased populations is evident in these prehistoric constructions, overturning previous assumptions that competition and conflict were absent in hunter–gatherer societies. The findings underscore the diversity of pathways that led to complex societal organizations, reflected in the emergence of monumental constructions such as the Siberian forts. They also highlight the significance of local environmental conditions in shaping the trajectories of human societies. More information: Henny Piezonka et al, The world's oldest-known promontory fort: Amnya and the acceleration of hunter-gatherer diversity in Siberia 8000 years ago, Antiquity (2023). DOI: 10.15184/aqy.2023.164 Journal information: Antiquity Provided by Free University of Berlin
-
Hackers (1995) | Sci-fi interfaces BY SCIFIHUGHF Our third film is from 1995, directed by Iain Softley. Hackers is about a group of teenage computer hackers, of the ethical / playful type who are driven by curiosity and cause no harm — well, not to anyone who doesn't deserve it. One of these hackers breaks into the "Gibson" computer system of a high-profile company and partially downloads what he thinks is an unimportant file as proof of his success. However, this file is actually a disguised worm program, created by the company's own chief of computer security to defraud the company of millions of dollars. The security chief tries to frame the hackers for various computer crimes to cover his tracks, so the hackers must break back into the system to download the full worm program and reveal the true culprit. The film was made in the time before Facebook when it was common to have an online identity, or at least an online handle (nick), distinct from the real world. Our teenage hacker protagonists are: As hackers they don't have a corporate budget, so they use a variety of personal computers rather than the expensive SGI workstations we saw in the previous films. And since it's the 1990s, their network connections are made with modems over the analog phone system and important files will fit on 1.44 megabyte floppy disks. The Gibson, though, is described as "big iron", a corporate supercomputer. Again this was the 1990s, when a supercomputer would be a single very big and very expensive computer, not thousands of PC CPUs and GPUs jammed into racks as in the early 21st C. As befits such an advanced piece of technology, it has a three-dimensional file browsing interface which is on display both times the Gibson is hacked. First run The first hack starts at about 24 minutes. Junior hacker Joey has been challenged by his friends to break into something important such as a Gibson. The scene starts with Joey sitting in front of his Macintosh personal computer and reviewing a list of what appear to be logon or network names and phone numbers. The camera flies through a stylised cyberspace representation of the computer network, the city streets, then the physical rooms of the target company (which we will learn is Ellingson Minerals), and finally past a computer operator sitting at a desk in the server room and into the 3D file system. This single "shot" actually switches a few times between the digital and real worlds, a stylistic choice repeated throughout the film. Although never named in the film, this file system is the "City of Text" according to the closing credits. Joey looks down on the City of Text. Hackers (1995) The file system is represented as a virtual cityscape of skyscraper-like blocks. The ground plane looks like a printed circuit board with purple traces (lines). The towers are simple box shapes, all the same size, as if constructed from blue-tinted glass or acrylic plastic. Each of the four sides and the top shows a column of text in white lettering, apparently the names of directories or files. Because the tower sides are transparent, the reverse-facing text on the far sides is also visible, cluttering the display. This 3D file system is the most dynamic of those in this review. Joey flies among the towers rather than walking, with exaggerated banking and tilting as he turns and dives. At ground level we can see some simple line graphics at the left as well as text. Joey flies through the City of Text, banking as he changes direction.
Hackers (1995) The city of text is even busier due to animation effects. Highlight bars move up and down the text lists on some panes. Occasionally a list is cleared and redrawn top to bottom, while others cycle between two sets of text. White pulses flow along the purple ground lanes and fly between the towers. These animations do not seem to be interface elements. They could be an indicator of overall activity with more pulses per second meaning more data being accessed, like the blinking LED on your Ethernet port or disk drive. Or they could be a screensaver, as it was important on the CRT displays of the 1990s to not display a static image for long periods as it would “burn in” and become permanent. Next there is a very important camera move, at least for analysing the user interface. So far the presentation has been fullscreen and obviously artificial. Now the camera pulls back slightly to show that this City of Text is what Joey is seeing on the screen of his Macintosh computer. Other shots later in the film will make it clear that this is truly interactive, he is the one controlling the viewpoint. Joey looks at a particular list of directories/files on one face of a skyscraper. Hackers (1995) I’ll discuss how this might work later in the analysis section. For now it’s enough to remember that this is a true file browser, the 3D equivalent of the Macintosh Finder or Windows File Explorer. While Joey is exploring, we cut to the company server room. This unusual activity has triggered an alarm so the computer operator telephones the company security chief at home. At this stage we don’t know that he’s evil, but he does demand to be addressed by his hacker handle “The Plague” which doesn’t inspire confidence. (The alarm itself shows that a superuser / root / administrator account is in use by displaying the password for everyone to see on a giant screen. But we’re not going to talk about that.) Joey wants to prove he has hacked the Gibson by downloading a file, but by the ethics of the group it shouldn’t be something valuable. He selects what he thinks will be harmless, the garbage or trash directory on a particular tower. It’s not very clear but there is another column of text to the right which is dimmed out. Joey selects the GARBAGE directory and a list of contents appears. Hackers (1995) There’s a triangle to the right of the GARBAGE label indicating that it is a directory, and when selected a second column of text shows the files within it. When Joey selects one of these the system displays what today would be called a Live Tile in Windows, or File Preview in the Mac Finder. But in this advanced system it’s an elaborate animation of graphics and mathematical notation. Joey decides this is the file he wants and starts a download. Since he’s dialled in through an old analog phone modem, this is a slow process and will eventually be interrupted when Joey’s mother switches his Macintosh off to force him to get some sleep. Joey looks at the animation representing the file he has chosen. Hackers (1995) Physical View Back in the server room of Ellingson Minerals and while Joey is still searching, the security chief AKA “The Plague” arrives. And here we clearly see that there is also a physical 3D representation of the file system. The Plague makes a dramatic entrance into the physical City of Text. Hackers (1995) Just like the virtual display it is made up of rectangular towers made of blue tinted glass or plastic, arranged on a grid pattern like city skyscrapers. 
Each is about 3 metres high and about 50cm wide and deep. Again matching the virtual display, there is white text on all the visible sides, being updated and highlighted. However there is one noticeable difference, the bottom of each tower is solid black. What are the towers for? Hackers is from 1995, when hard drives and networked file servers were shoebox- to pizza-box-sized, so one or two would fit into the base of each tower. The physical displays could be just blinkenlights, an impressive but not particularly useful visual display, but in a later shot there’s a technician in the background looking at one of the towers and making notes on a pad, so they are intended to show something useful. My assumption is that each tower displays information about the actual files being stored inside, mirroring the virtual city of text shown online. When he reaches the operator’s desk, The Plague switches the big wall display to the same 3D virtual file system. The Plague on the left and the night shift operator watch what Joey is doing on a giant wall screen. Hackers (1995) He uses an “echo terminal” command to see exactly what Joey is doing, so sees the same garbage directory and that the file is being copied. We’ll later learn that this seemingly harmless file is actually the worm program created by The Plague, and that discovering it had been copied was a severe shock. Here he arranges for the phone connection to be traced and Joey questioned by his government friends in the US Secret Service (which at the time was responsible for investigating some computer security incidents and crimes), setting in motion the main plot elements. Tagged: animated, architecture, big screens, busted!, control room, cyan, doorway, drama, eavesdropping, emergency, flashing, flying, glow, hacking, industrial espionage, labeling, monitoring, navigating, orange, purple, security, surveillance, terminal, translucency, translucent display, wall interface Second run After various twists and turns our teenage hackers are resolved to hack into the Gibson again to obtain a full copy of the worm program which will prove their innocence. But they also know that The Plague knows they know about the worm, Ellingson Minerals is alerted, and the US Secret Service are watching them. This second hacking run starts at about 1 hour 20 minutes. The first step is to evade the secret service agents by a combination of rollerblading and hacking the traffic lights. (Scenes like this are why I enjoy the film so much.) Four of our laptop-wielding hackers dial in through public phone booths. The plan is that Crash will look for the file while Acid, Nikon, and Joey will distract the security systems, and they are expecting additional hacker help from around the world. We see a repeat of the earlier shot flying through the streets and building into the City of Text, although this time on Crash’s Macintosh Powerbook. Crash enters the City of Text. Hackers (1995) It seems busier with many more pulses travelling back and forth between towers, presumably because this is during a workday. The other three start launching malware attacks on the Gibson. Since the hacking attempt has been anticipated, The Plague is in the building and arrives almost immediately. The Plague walks through the physical City of Text as the attack begins. Hackers (1995) The physical tower display now shows a couple of blocks with red sides. 
This could indicate the presence of malware, or just that those sections of the file system are imposing a heavy CPU or IO load due to the malware attacks. This time The Plague is assisted by a full team of technicians. He primarily uses a “System Command Shell” within a larger display that presumably shows processor and memory usage. It’s not the file system, but has a similar design style and is too cool not to show: The Plague views system operations on a giant screen, components under attack highlighted in red on the right. Hackers (1995) Most of the shots show the malware effects and The Plague responding, but Crash is searching for the worm. His City of Text towers show various “garbage” directories highlighted in purple, one after the other. Crash checks the first garbage directory, in purple. Other possible matches in cyan on towers to the right. Hackers (1995) What’s happening here? Most likely Crash has typed in a search wildcard string and the file browser is showing the matching files and folders. Why are there multiple garbage directories? Our desktop GUIs always show a single trashcan, but under the hood there is more than one. A multiuser system needs at least one per user, because otherwise files deleted by Very Important People working with Very Sensitive Information would be visible, or at least the file names visible, to everyone else. Portable storage devices, floppy disks in Hackers and USB drives today, need their own trashcan because the user might still expect to be able to undelete files even if it has been moved to another computer. For the same reason a networked drive needs its own trashcan that isn’t stored on the connecting computer. So Crash really does have to search for the right garbage directory in this giant system. As hackers from around the world join in, the malware effects intensify. More tower faces, both physical and digital, are red. The entire color palette of the City of Text becomes darker. Crash flies through the City of Text, a skyscraper under siege. Hackers (1995) This could be an automatic effect when the Gibson system performance drops below some threshold, or activated by the security team as the digital equivalent of traffic cones around a door. Anyone familiar with the normal appearance of the City of Text can see at a glance that something is wrong and, presumably, that they should log out or at least not try to do anything important. Crash finds the right file and starts downloading, but The Plague hasn’t been fully distracted and uses his System Command Shell to disconnect Crash’s laptop entirely. Rather than log back in, Crash tells Joey to download the worm and gives him the full path to the correct garbage directory, which for the curious is root/.workspace/.garbage (the periods are significant, meaning these names should not normally be displayed to non-technical users). We don’t see how Joey enters this into the file browser but there is no reason it should be difficult. Macintosh Finder windows have a clickable text search box, and both the Ubuntu Desktop Shell and Microsoft Windows start screen will automatically start searching for files and folders that match any typed text. Joey downloads the worm, this time all of it. The combined malware attacks crash The Gibson. Unfortunately the secret service agents arrive just in time to arrest them, but all ends well with The Plague being exposed and arrested and our hacker protagonists released. 
Tagged: 3D rendering, animation, architecture, big screens, blue, bright is more, call to action, color cue, command and control, control room, crisis, cyan, dark, defense, flashing, flowing, flying, glow, hacking, industrial espionage, keyboard, mission, motion cue, navigating, nerdsourcing, personal computer, red, red is warning, search, search, status indicator, threshold alert, translucency, translucent display, trap, trash, wall mounted, yellow Analysis How believable is the interface? The City of Text has two key differences from the other 3D file browsers we’ve seen so far. It must operate over a network connection, specifically over a phone modem connection, which in the 1990s would be much slower than any Ethernet LAN. This 3D view is being rendered on personal computers, not specialised 3D workstations. Despite these constraints, the City of Text remains reasonably plausible. Would the City of Text require more bandwidth than was available? What effect would we expect from a slow network connection? It’s a problem when copying files, upload or download, but much less so for browsing a file system. The information being passed from the Gibson to the 3D file browser is just a list of names in each directory and a minimal set of attributes for each, not the file contents. In 1995 2D file browsers on personal computers were already showing icons, small raster images, for each file which took up more memory than the file names. The City of Text doesn’t, so the file data would certainly fit in the bandwidth available. The flying viewpoint doesn’t require much bandwidth either. There is no avatar or other representation of the user, just an abstract viewpoint. Only 9 numbers are needed to describe where you are and what you’re looking at in 3D space, and predictive techniques developed for games and simulations can reduce the network bandwidth required even more. Networked file systems and file browsers already existed in 1995, for example FTP and Gopher, although with pure text interfaces rather than 3D or even 2D graphics. The only missing component would be the 3D viewpoint coordinates. PCs in the 1990s, especially laptops, rarely had any kind of 3D graphics acceleration and would not have been able to run the Jurassic Park or Disclosure 3D file browsers. The City of Text, though, is much less technically demanding even though it displays many more file and folder names. Notice that there is no hidden surface removal, where the front sides of a 3D object hide those that are further away. There’s no lighting, with everything rendered in flat colors that don’t depend on the direction of the sun or other light sources, and no shadows. There are no images or textures, just straight lines and plain text. And finally everything is laid out on an axis-aligned grid; meaning all the graphics are straight up/down, left/right, or forwards/back; and all the towers and text are the same size. Similar shortcuts were used in 1990s PC games and demo scene animations, such as the original Doom in which players could look from side to side but not up or down. I’m not saying that the City of Text on a 1990s PC or laptop would be easy, especially on Joey’s Macintosh LC, but it is plausible. Alas the worm animation shown when that particular file is selected is not possible. We see fractal graphics and mathematical notation in 3D, and it’s a full screen image rather than a simple file icon. 
Whether it’s a pre-rendered animation or being generated on the fly there’s way too much to push through a modem connection, even though at the time “full screen” meant a lot less pixels than now in the 21st C. The physical towers were also not possible. Three metre high flat screen displays didn’t exist in 1995, and I don’t see how that many projectors could be installed in the ceiling without interfering with each other. How well does the interface inform the narrative of the story? Hackers is a film all about computers and the people who work with them, and therefore must solve the problem (which still exists today) of making what is happening visible and understandable to a non-technical audience. Director Iain Softley said he wanted a metaphorical representation of how the characters perceived the digital world, not a realistic one. Some scenes use stylised 2D graphics and compositing to create a psychedelic look, while the 3D file browser is meant to be a virtual equivalent to the physical city of New York where Hackers is set. At least for some viewers, myself included, it works. The worm animation also works well. Joey is looking for an interesting file, a trophy, and the animation makes it clear that this is indeed an extraordinary file without needing to show the code. The physical towers, though, are rather silly. The City of Text is meant to be metaphorical, a mental landscape created by hackers, so we don’t need a physical version. How well does the interface equip the character to achieve their goals? The City of Text is very well suited to the character goals, because they are exploring the digital world. Looking cool and having fun are what’s important, not being efficient. Now if you’ll excuse me, I have a rollerblading lesson before the next review…
-
Reclaiming the Web with a Personal Reader FACUNDO OLANO Background Last year I experienced the all-too-common career burnout. I had a couple of bad projects in a row, yes, but more generally I was disillusioned with the software industry. There seemed to be a disconnection between what I used to like about the job, what I was good at, and what the market wanted to buy from me. I did the usual thing: I slowed down, quit my job, started therapy. I revised my habits: eat better, exercise, meditate. I tried to stay away from programming and software-related reading for a while. Because I didn’t like the effect it had on me, but also encouraged by its apparent enshittification, I quit Twitter, the last social media outlet I was still plugged into. Not working was one thing, but overcoming the productivity mandate —the feeling that I had to make the best of my time off, that I was “recharging” to make a comeback— was another. As part of this detox period, I read How to Do Nothing, a sort of artistic manifesto disguised as a self-help book that deals with some of these issues. The author Jenny Odell mentions Mastodon when discussing alternative online communities. I had heard about Mastodon, I had seen some colleagues move over there, but never really looked at it. I thought I should give it a try. ∗ ∗ ∗ I noticed a few things after a while on Mastodon. First, it felt refreshing to be back in control of my feed, to receive strictly chronological updates instead of having an algorithmic middleman trying to sell me stuff. Second, many people were going through a similar process as mine, one of discomfort with the software industry and the Web. Some of them were looking back at the old times for inspiration: playing with RSS, Bulletin Board Systems, digital gardens, and webrings; some imagined a more open and independent Web for the future. Third, not only wasn’t I interested in micro-blogging myself, but I didn’t care for most of the updates from the people I was following. I realized that I had been using Twitter, and now Mastodon, as an information hub rather than a social network. I was following people just to get notified when they blogged on their websites; I was following bots to get content from link aggregators. Mastodon wasn’t the right tool for that job. Things clicked for me when I learned about the IndieWeb movement, particularly their notion of social readers. Trying Mastodon had been nice, but what I needed to reconnect with the good side of the Web was a feed reader, one I could adjust arbitrarily to my preferences. There are plenty of great RSS readers out there, and I did briefly try a few, but I knew this was the perfect excuse for me to get back in touch with software development: I was going to build my own personal reader. Goals As a user, I had some ideas of what I wanted from this project. Rather than the email inbox metaphor I’ve most commonly seen in RSS readers, I wanted something resembling the Twitter and Mastodon home feed. That is: instead of a backlog to clear every day, a stream of interesting content whenever I opened the app. The feed would include articles from blogs, magazines, news sites, and link aggregators, mixed with notifications from personal accounts (Mastodon, Goodreads, GitHub). The parsing should be customizable to ensure a consistent look and feel, independent of the shape of the data each source published. I wasn’t interested in implementing the “social” features of a fully-fledged indie reader. 
I didn’t mind opening another tab to comment, nor having my “content” scattered across third-party sites. That’s what I started with but, once I had the basic functionality in place, I planned to drive development by what felt right and what felt missing as a user. My short-term goal was to answer, as soon as possible, this question: could this eventually become my primary —even my sole— source of information on the Web? If not, I’d drop the project right away. Otherwise, I could move on to whatever was missing to realize that vision. ∗ ∗ ∗ As a developer, I wanted to test some of the ideas I’d been ruminating on for over a year. Although I hadn’t yet formulated it in those terms, I wanted to apply what I expressed in another post as: user > ops > dev. This meant that, when prioritizing tasks or making design trade-offs, I would choose ease of operation over development convenience, and I would put user experience above everything else. Since this was going to be an app for personal use, and I had no intention of turning it into anything else, putting the user first just meant dogfooding: putting my “user self” —my needs— first. Even if I eventually wanted other people to try the app, I presumed that I had a better chance of making something useful by designing it ergonomically for me, than by trying to satisfy some ideal user. It was very important to me that this didn’t turn into a learning project or, worse, a portfolio project. This wasn’t about productivity: it was about reconnecting with the joy of software development; the pleasure wouldn’t come from building something but from using something I had built. Assuming myself as the single target audience meant that I could postpone whatever I didn’t need right away (e.g. user authentication), that I could tackle overly-specific features early on (e.g. send to Kindle), that I could assume programming knowledge for some scenarios (e.g. feed parser customization, Mastodon login), etc. Design Given that mental framework, I needed to make some initial technical decisions. User Interface Although this was going to be a personal tool, and I wanted it to work on a local-first setup, I knew that if it worked well I’d want to access it from my phone, in addition to my laptop. This meant that it needed to be a Web application: I wanted the Web UI to be somewhat dynamic, but I didn’t intend to build a separate front-end application, learn a new front-end framework, or re-invent what the browser already provided. Following the boring tech and radical simplicity advice, I looked for server-side rendering libraries. I ended up using a mix of htmx and its companion hyperscript, which felt like picking Web development up where I’d left off over a decade ago. Architecture Making the app ops-friendly meant not only that I wanted it to be easy to deploy, but easy to set up locally, with minimal infrastructure —not assuming Docker, Nix, etc. A “proper” IndieWeb reader, at least as described by Aaron Parecki, needs to be separated into components, each implementing a different protocol (Micropub, Microsub, Webmentions, etc.). This setup enforces a separation of concerns between content fetching, parsing, displaying, and publishing. I felt that, in my case, such architecture would complicate development and operations without buying me much as a user. Since I was doing all the development myself, I preferred to build a monolithic Web application. I chose SQLite for the database, which meant one less component to install and configure. 
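As an illustration of how small such a monolith can be, here is a minimal sketch of the fetch-and-store loop a reader like this needs. This is not feedi's actual code; the schema and names are made up, and it assumes the feedparser library alongside the standard-library sqlite3 module:

```python
# Hypothetical sketch of a single-process fetch-and-store step
# (table and column names are illustrative, not feedi's real schema).
import sqlite3
import feedparser

def fetch_into_db(feed_url: str, db_path: str = "reader.db") -> None:
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS entries (
               url TEXT PRIMARY KEY, feed TEXT, title TEXT, published TEXT)"""
    )
    parsed = feedparser.parse(feed_url)
    for entry in parsed.entries:
        # INSERT OR IGNORE keeps re-polling idempotent: known URLs are skipped.
        con.execute(
            "INSERT OR IGNORE INTO entries (url, feed, title, published) VALUES (?, ?, ?, ?)",
            (entry.get("link"), feed_url, entry.get("title"), entry.get("published")),
        )
    con.commit()
    con.close()

fetch_into_db("https://example.com/feed.xml")  # placeholder feed URL
```

A scheduler running inside the same process can then call a function like this periodically for every configured feed.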
In addition to the Web server, I needed some way to periodically poll the feeds for content. The simplest option would have been a cron job, but that seemed inconvenient, at least for the local setup. I had used task runners like Celery in the past, but that required adding a couple of extra components: a consumer process to run alongside the app and something like Redis to act as a broker. Could I get away with running background tasks in the same process as the application? That largely depended on the runtime of the language. Programming language At least from my superficial understanding of it, Go seemed like the best fit for this project: a simple, general-purpose language, garbage-collected but fast enough, with a solid concurrency model and, most importantly for my requirements, one that produced easy-to-deploy binaries. (I later read a similar case for Golang from the Miniflux author). The big problem was that I’d never written a line of Go, and while I understood it’s a fairly accessible language to pick up, I didn’t want to lose focus by turning this into a learning project. Among the languages I was already fluent in, I needed to choose the one I expected to be most productive with, the one that let me build a prototype to decide whether this project was worth pursuing. So I chose Python. The bad side of using Python was that I had to deal with its environment and dependency quirks, particularly its reliance on the host OS libraries. Additionally, it meant I’d have to get creative if I wanted to avoid extra components for the periodic tasks. (After some research I ended up choosing gevent and an extension of the Huey library to run them inside the application process). The good side was that I got to use great Python libraries for HTTP, feed parsing, and scraping. Testing (or lack thereof) I decided not to bother writing tests, at least initially. In a sense, this felt “dirty”, but I still think it was the right call given what I was trying to do: Development There’s a kind of zen flow that programmers unblock when they use their software daily. I don’t mean just testing it but experiencing it as an end user. There’s no better catalyst for ideas and experimentation, no better prioritization driver than having to face the bugs, annoyances, and limitations of an application first-hand. After some trial and error with different UI layouts and features, a usage pattern emerged: open the app, scroll down the main feed, pin to read later, open to read now, bookmark for future reference. I decided early on that I wanted the option to read articles without leaving the app (among other things, to avoid paywalls and consent popups). I tried several Python libraries to extract HTML content, but none worked as well as the readability one used by Firefox. Since it’s a JavaScript package, I had to resign myself to introducing an optional dependency on Node.js. With the basic functionality in place, a problem became apparent. Even after curating the list of feeds and carefully distributing them in folders, it was hard to get interesting content by just scrolling items sorted by publication date: occasional blog posts would get buried behind Mastodon toots, magazine features behind daily news articles. I needed to make the sorting “smarter”. Considering that I only followed sources I was interested in, it was safe to assume that I’d want to see content from the least frequent ones first. 
If a monthly newsletter came out in the last couple of days, that should show up at the top, before any micro-blogging or daily news items. So I classified sources into "frequency buckets" and sorted the feed to show the least frequent buckets first. Finally, to avoid this "infrequent content" sticking at the top every time I opened the app, I added a feature that automatically marks entries as "already seen" as I scroll down the feed. This way I always get fresh content and never miss "rare" updates. ∗ ∗ ∗ At first, I left the app running on a terminal tab on my laptop and used it while I worked on it. Once I noticed that I liked what was showing up in the feed, I set up a Raspberry Pi server in my local network to have it available all the time. This, in turn, encouraged me to improve the mobile rendering of the interface, so I could access it from my phone. I eventually reached a point where I missed using the app when I was out, so I decided to deploy it to a VPS. This forced me to finally add the authentication and multi-user support I'd been postponing and allowed me to give access to a few friends for beta testing. (The VPS setup also encouraged me to buy a domain and set up this website, getting me closer to the IndieWeb ideal that inspired me in the first place.) Conclusion It took me about three months of (relaxed) work to put together my personal feed reader, which I named feedi. I can say that I succeeded in reengaging with software development, and in building something that I like to use myself, every day. Far from a finished product, the project feels more like my Emacs editor config: a perpetually half-broken tool that can nevertheless become second nature, hard to justify from a productivity standpoint but fulfilling because it was built on my own terms. I've been using feedi as my "front page of the internet" for a few months now. Beyond convenience, by using a personal reader I'm back in control of the information I consume, actively on the lookout for interesting blogs and magazines, better positioned for discovery and even surprise.
-
Dagster, dbt, duckdb as new local MDS GEORG HEILER Introduction In today's fast-paced data landscape, a streamlined and robust data stack is paramount for any organization's infrastructure. This article explores an integrated setup utilizing DuckDB, Dagster, and dbt - three critical tools in modern data engineering. Following the newest developments in the data ecosystem, we argue that we can rethink the current state of data transformation pipelines and gain various benefits such as reduced PaaS costs, an improved development experience, and higher overall implementation quality. Components This data stack comprises the following elements: Architecture Dagster is the core of this architecture, managing the execution environment. As long as your use case can be expressed in Python, Dagster's capabilities are virtually limitless. Dagster also seamlessly integrates with other (open-source) tools or APIs, facilitating data ingestion with Airbyte and transformations with dbt. The dagster-dbt integration provides a robust method to manage and monitor dbt-driven data transformations. Meanwhile, dbt combined with DuckDB is the second most important part of our architecture, responsible for data transformation. It is important to recognize the independence between Dagster and dbt-duckdb. In the blog we will cover: Cloud architectures benefit immensely from compute-storage segregation, bolstered by object storage cost-efficiency.
Our proposed stack scales from a solo developer’s machine to cloud-based Kubernetes clusters. In particular, developer productivity and implementation quality are enhanced because each of the components is built with the best software engineering practices in mind. This is different from the currently used PaaS data platforms, which are often closed-source and proprietary. We show how you can combine the best of both worlds: the developer productivity and software engineering best practices of the new local modern data stack, combined with PaaS platforms for a great data consumer experience. Data layers The rise of “Big RAM” and multicore utilization has started to eclipse “Big Data”, leading to the popularity of simple, scalable data processing engines like DuckDB, DataFusion, and Polars. The dbt extension for DuckDB facilitates the implementation of complex pipelines and integration through a flexible plugin ecosystem. The accompanying diagram provides a blueprint for engineers to navigate data transformation, emphasizing crucial decisions within each layer. The diagram is a representation of a dbt project divided into three layers from left to right: source, transformation, and serving layer. Source layer: The starting point of data pipelines, encompassing local and remote data sources. It is defined as the source definition. Transformation layer: This stage sees data transformation within a DuckDB file, allowing the DuckDB engine to fully utilize the available resources and deliver optimal performance. It is defined as the dbt models. Serving layer: The final layer where data becomes accessible to the data consumers. The data can be exported in various formats or transferred to a database server. It is defined as the dbt models with an external materialization. Data partitioning challenge Data naturally accumulates over time, presenting a unique set of challenges. In reality, data typically arrives in partitions (e.g., daily, monthly, or every five minutes). If partitions are processed independently, the need for complex “Big Data” tooling diminishes for most use cases. Despite improvements in DuckDB’s out-of-core processing, the memory constraints of a single machine persist. We must adopt strategies to reduce the memory footprint, such as: Columnar storage and data compression: The DuckDB file format is highly optimized for analytics, leveraging data pruning and late materialization techniques. Processing subsets of data: To avoid redundant transformations, it’s essential to process subsets of data incrementally and idempotently. Dagster Dagster as a platform backbone Dagster should be treated as a primary component due to its: Dagster’s built-in features support these orchestration needs, enabling rapid development and integration of new data assets within your data platform. We recommend taking a look at the great reference projects from the Dagster team: Dagster as partition manager With the data partitioning challenge in mind, Dagster’s partitioning feature enables efficient processing of data subsets. Dagster supplies partition parameters to the dbt run command, offers a comprehensive view of partitioned jobs, and facilitates easy backfilling - even for dependent data assets. This is even the case for nested partitions or dynamically created ones. Dagster and dbt The integration between Dagster and dbt is seamless, offering an orchestration layer and a metadata context for dbt projects. 
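To illustrate the partition handling just described, here is a minimal sketch (not the authors' project code) of a Dagster asset that forwards its partition key to dbt as a project variable. It assumes the @dbt_assets / DbtCliResource API of a recent dagster-dbt release; the run_date variable, the start date, and the manifest path are placeholders.

```python
import json
from pathlib import Path

from dagster import DailyPartitionsDefinition
from dagster_dbt import DbtCliResource, dbt_assets

# One Dagster partition per day of source data (placeholder start date).
daily_partitions = DailyPartitionsDefinition(start_date="2023-01-01")


@dbt_assets(
    manifest=Path("target/manifest.json"),  # dbt's compiled manifest
    partitions_def=daily_partitions,
)
def partitioned_dbt_assets(context, dbt: DbtCliResource):
    # Forward the Dagster partition key to dbt as a project variable, so each
    # run transforms only one day's slice of the data. Backfills and reruns
    # then come from Dagster's partition UI, and stay idempotent per partition.
    dbt_vars = {"run_date": context.partition_key}
    yield from dbt.cli(
        ["build", "--vars", json.dumps(dbt_vars)],
        context=context,
    ).stream()
```

Inside the dbt project, a model would then filter its source on something like var('run_date') so that only the selected partition is processed.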
This integration shines when using dbt’s project variables for data partitioning. Although the dbt project can operate standalone, Dagster’s orchestration complements dbt’s terminal-based execution. Developers can efficiently run and repeat multiple pipelines with different partitions, simulating the same behavior as in the production environment. Both small and very large data volumes can easily be processed by applying a good partitioning strategy in this way. dbt and DuckDB DuckDB’s concurrency model DuckDB breaks from the server-based concurrency model, offering an in-process OLAP system that slashes latency. The current era of ample RAM and advancements in single-machine performance make DuckDB’s approach increasingly relevant for most data use cases. DuckDB’s approach to concurrency—supporting a single writer or multiple readers—is detailed here. dbt + DuckDB execution model dbt is a transformation juggernaut in data engineering, traditionally executing SQL queries on remote servers. Integrating with DuckDB, however, shifts the paradigm, placing the transformation process directly in the dbt process rather than on a distant server. This fundamental change in the runtime significantly impacts the development and data engineering workflow. We will cover its impact in more detail in the development section below. The integration of dbt with DuckDB presents many benefits from both tools: Development The power of local development Software engineering has used the local environment for decades, for various good reasons: Cost-effective Capitalized developer machines are cheap for companies and are there to be used. The cloud introduces development costs as operating expenses due to the consumption of rented computing time, storage, or services. Fast feedback loop Running programs locally makes development much faster, and the feedback loop is shorter. Do you remember the last time you were waiting for one of the PaaS providers to spin up some compute nodes? Often this takes 5-10 minutes. Testing environment The developer can quickly test changes locally before contributing the code to the shared repositories. Isolated and self-contained environment The environment can be containerized and therefore standardized for each stage or developer. However, different trends and new developments in the data ecosystem have changed how data pipelines are created over time. We will walk through the history of those trends and explain why the modern data stack allows us to set up a local environment again. Before dbt IT projects handling data existed long before dbt was created. Usually, a monolithic appliance database (Oracle, SQL Server) was where developers executed their code and ran the data pipelines. Local deployments and testing were easy because each database had a lightweight version that could run locally on the developer’s machine, enjoying all the benefits of local productivity. However, other problems, such as observability, lineage, and version control for data assets, remained unsolved. These were later addressed with the introduction of dbt. What changed over time? Separation of compute and storage The development of the Hadoop ecosystem, and later Spark and cheap remote storage, popularised scaling compute and storage separately to best fit individual needs. As a result, teams can quickly spin up new compute instances to query and transform the data on the remote storage. 
Remote storage and open table formats Advances in analytical file formats and the proliferation of cloud storage solutions, such as S3 and similar services, have revolutionized data storage. Open table formats like Delta and Iceberg have emerged, logically grouping files into a single table and pushing table metadata toward remote storage. Delta’s whitepaper and T-Mobile’s blog, Why migrate to a Data Lakehouse to Delta Lake, elucidate the shift toward remote storage and open table formats. The industry trend is evident with Microsoft Fabric’s adoption of Delta and the support of modern query engines for Iceberg. PaaS data platform Modern data platforms are usually fully featured distributed environments that make a living by providing one uniform data platform that automates complex infrastructure management. They bring a lot of good out-of-the-box features like RBAC, a notebook experience, and integration of visualization tools. At the same time, they add unnecessary overhead for data transformation. Current dbt development practice Platforms like Snowflake and Databricks have shifted development from local environments to a cloud-based model. As a result, the need emerged to develop a transformation framework (dbt) that allows for remote execution while providing the best software engineering practices. The following diagram illustrates the current dbt development process. Imagine that we have the data in remote storage with names and salaries. This data can either be ingested as a managed table or exposed through an external table definition. Either way, the table is defined as the source definition in the source.yaml file, depicted in orange. The developer’s task is to change the calculation for the new salary. Each developer has their own unique environment: their own branch and implementation of the dbt project, depicted in an individual color per developer. model1.sql contains the current changes on the respective branch, and model1 execution represents the code that would be sent to and executed on the transformation engine. Note that the execution code contains the SQL change from the current branch and the developer-specific schema, which is defined in the profiles.yaml file. PaaS problems for data transformation pipelines Normally, the server-side components of the PaaS data platform (the big data engine and custom extensions) are closed-source. Therefore, a local (possibly neatly containerized) replication and simulation of the production environment is not possible. As a result, the code has to be tested and executed in a remote environment. Due to the forced remote development, the feedback loop is considerably slower and introduces additional development costs. Developers often copy the code from a source package into a remote notebook for debugging purposes, or have to deploy code to the platform to test an E2E data pipeline with the orchestrator. Changes towards new development Computer improvements Over time, computers have improved in every respect. Nowadays, computers can efficiently process gigabytes of data and run multiple Docker containers at a time. In-memory OLAP systems The rapid development of in-memory OLAP systems in the last few years and the introduction of the Apache Arrow format have changed the data landscape and the interoperability between different engines. This doesn’t mean that we can run everything at once. 
But with smart partitioning and processing a chunk at a time, we can reasonably quickly achieve the desired data transformation. Open file format library independent of big data engines Generally, open table formats such as Delta or Iceberg were developed to serve big data engines such as Spark or Flink. Parts of their codebase are closely coupled with their execution engines. Because of that, developers would be forced to use Spark and run a JVM to interact with the open table formats. This has changed recently: the open file format is a protocol and not a specific implementation. This means that you can implement the interface in a native library - often written in Rust - that can be used to interact with the open table format without the need for a clunky big data framework. delta-rs serves as such an implementation, making Python bindings very easy to implement. It eliminates the need for the JVM or the strong coupling to a distributed execution engine. It was born out of the need to read Kafka topics and write the data directly to Delta tables in S3, an approach that proved more cost-effective and reliable than using a big data engine (Spark). Iceberg offers a native implementation as well: iceberg-python. New “old” Dagster + dbt + DuckDB development practice The reasons above, together with the convergence of dbt and DuckDB, transform data processing and recalibrate development practices for data. It brings software engineering best practices back to the domain of data processing and makes local development possible again - and easy. In this setup, our goal is to have: Uniform data source for developers Each developer should be able to pull the same data into the transformation process quickly. This means dbt-duckdb has to be able to pull the data automatically from the remote storage. With dbt-duckdb, there are several great out-of-the-box possibilities to access remote data: DuckDB and remote files: DuckDB has a native way to access remote files with the httpfs or filesystem extension. You can define it easily in the dbt-duckdb source configuration. dbt-duckdb plugins: Plugins in dbt-duckdb are an extension to the dbt adapter that allows us to extend the reading and writing capabilities of the dbt-duckdb process. There are plugins to read Excel files or open table formats such as Iceberg or Delta. For instance, the delta plugin uses the delta-rs package, which enables reading directly from different cloud object stores. Using Delta with dbt-duckdb is simple. You can find an example project with a sample configuration here. Further new plugins can be implemented easily. In case one is missing, feel free to open an issue and connect in the dbt Slack. 
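To make the remote data options above concrete, here is a minimal plain-Python sketch of the capabilities that dbt-duckdb builds on: DuckDB’s httpfs extension for remote files, and the delta-rs (deltalake) Python package for Delta tables. The URL and bucket name are placeholders, and in an actual project this configuration would live in the dbt-duckdb profile and source YAML rather than in Python code.

```python
import duckdb
from deltalake import DeltaTable  # Python bindings to delta-rs

con = duckdb.connect()  # in-process, no database server involved
con.execute("INSTALL httpfs;")  # one-time install of the extension
con.execute("LOAD httpfs;")

# Read a remote Parquet file directly over HTTP (placeholder URL).
remote = con.execute(
    "SELECT count(*) AS n FROM read_parquet('https://example.com/data/events.parquet')"
).df()

# Read a Delta table from object storage via delta-rs (credentials omitted)
# and hand it to DuckDB as an Arrow table - roughly what the dbt-duckdb
# delta plugin does for you behind a source definition.
dt = DeltaTable("s3://example-bucket/events_delta")  # placeholder URI
con.register("events_delta", dt.to_pyarrow_table())
latest = con.execute("SELECT * FROM events_delta LIMIT 10").df()
```

Registering the Arrow table gives the in-process DuckDB engine direct access to the Delta data without any JVM or distributed engine in the loop.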
Isolated local development Working in isolation on a shared database server used to be complicated. A solution was introduced with dbt - as we showed before in the visualization of the current state of data development. Being able to develop independently from others, knowing that changes do not interfere with each other, is very important for developer productivity. One main idea of the new local stack is to introduce changes in the local environment first. Once each developer has tested changes locally in their own environment, they move up to shared environments closer to production. Streamlined staging and deployment processes A cookbook recipe to define an environment, including its state, is crucial for smooth transitions from development to production. This is the basis for ensuring that the development and production environments are the same and will not introduce problems due to environment differences. With the same environment, you can test and ensure the same behavior across the different deployment pipeline stages. We can easily containerize the whole orchestration and transformation pipeline, which means that the developer can guarantee the exact same runtime for the development, test, and prod environments. The new, better development practice Considering the above points, we suggest a potential implementation of current state-of-the-art development guidelines for data transformation pipelines. The diagram shows that each developer has a local containerized instance of the entire stack. The developer can start the process using the Dagster UI or the dbt CLI. The running process pulls the data from the source definition model and transforms it into the desired state. The source definition is defined in the source.yaml file. If the model is an external materialization, the process pushes the data to remote storage, where it further serves the data consumers. The model is defined in model1.sql, and the code that runs is illustrated in the model execution. When the transformation code lands in the main branch, the CI/CD agent can first run the tests and then push the code to the production environment. We use the newly developed and experimental delta plugin in the diagram, but the concept is the same for all plugins. Cost reduction As the business evolves and moves from strategic reporting (refresh 1x/day) towards near real-time products (15 minutes, 5 minutes, streaming), the PaaS products can get very expensive. Moving the transformation logic into a containerized environment can reduce the operational costs introduced by the PaaS platform and mitigate the vendor lock-in problem. Synergy of the PaaS and modern data stack We don’t think that the modern data stack and the PaaS data platform are opponents. They can be the perfect mix if combined in the right ways. In particular, the PaaS should become an implementation detail and part of the data platform rather than the single one-stop-shop solution for everything. The modern data stack advantages: The advantages of PaaS systems: Challenges of DuckDB + dbt Debugging and analyzing production DuckDB files One critical area is establishing effective methods for debugging and analyzing DuckDB files in a production environment. Troubleshooting and understanding production data is vital for maintaining system reliability and performance. The problem is the absence of a built-in server that facilitates access control and handles concurrency for multiple users. This can be compensated for with specific tooling, which we will discuss in the next section, but the overall DuckDB concurrency problem still holds. We want to explore and discuss better solutions to this problem in the future. Missing RBAC We proposed above to use DuckDB in a stateless (transformation-only) mode. There is not necessarily a need for access control directly on the DuckDB side. However, the adjacent layers of the stack then need to implement these control measures: Object store: File-based permission management at the storage layer. Serving layer: The serving layer can implement fine-grained data access controls such as (row/column) masking and filtering. The usage patterns can also be divided into groups: Creators (Developers): The only way to control access is on the source side, with the object store permissions. 
Data consumers: As outlined above, this problem can be a source of great synergy with the PaaS if used correctly. For example, we can push ready data to Microsoft Fabric or use a Delta Sharing server to control RBAC. Tooling Accessing remote DuckDB files and debugging them is part of the developer workflow. The tooling is not as mature as in traditional workflows, but it is improving daily. We have explored the following options: Serving layer The last layer in our transformation process is the serving layer. The dbt process exports the data to the storage; from there on, it is served and further integrated: Serving with an API server Three options for serving the data via an API server: Dagster integration With Dagster as a core part of the pipeline, arbitrary connections and integrations can easily be added. For example: Using PaaS as a serving layer The PaaS platforms have outstanding data analytics integration and data-consumer interaction features. They also offer great BI integrations such as: By combining the best of both worlds, both developer productivity and the data consumer experience can be great. Conclusion As the data ecosystem rapidly evolves, we must stay adaptive, seeking superior, simpler, and more cost-effective methodologies. Improved in-memory OLAP systems, faster remote storage, open file formats, and the new dbt-duckdb execution runtime allow us to introduce a new way of developing and executing data transformation pipelines, supported by software engineering best practices. This blog juxtaposes the traditional development state with fresh opportunities afforded by technological advances, advocating for a balanced approach that leverages both the modern data stack and PaaS platforms. Interestingly, the need for simplification is not unique to the transformation layer: for the ingestion of data, Dagster embedded ELT recently proposed a complementary model based on similar values. We are very excited about the data ecosystem’s future and how the new modern local stack evolves and supports that journey along the way. We are happy to discuss our blog with you, so don’t hesitate to write to us: Both authors thank the reviewers of the blog for their valuable feedback and input. A sample project demonstrating the interaction of dbt with DuckDB and external Delta files can be found here. The image was created using Firefly by Adobe. Parts of this text were adeptly generated by ChatGPT but enhanced by real humans.
-
Reclaiming the Web with a Personal Reader FACUNDO OLANO Background Last year I experienced the all-too-common career burnout. I had a couple of bad projects in a row, yes, but more generally I was disillusioned with the software industry. There seemed to be a disconnection between what I used to like about the job, what I was good at, and what the market wanted to buy from me. I did the usual thing: I slowed down, quit my job, started therapy. I revised my habits: eat better, exercise, meditate. I tried to stay away from programming and software-related reading for a while. Because I didn’t like the effect it had on me, but also encouraged by its apparent enshittification, I quit Twitter, the last social media outlet I was still plugged into. Not working was one thing, but overcoming the productivity mandate —the feeling that I had to make the best of my time off, that I was “recharging” to make a comeback— was another. As part of this detox period, I read How to Do Nothing, a sort of artistic manifesto disguised as a self-help book that deals with some of these issues. The author Jenny Odell mentions Mastodon when discussing alternative online communities. I had heard about Mastodon, I had seen some colleagues move over there, but never really looked at it. I thought I should give it a try. ∗ ∗ ∗ I noticed a few things after a while on Mastodon. First, it felt refreshing to be back in control of my feed, to receive strictly chronological updates instead of having an algorithmic middleman trying to sell me stuff. Second, many people were going through a similar process as mine, one of discomfort with the software industry and the Web. Some of them were looking back at the old times for inspiration: playing with RSS, Bulletin Board Systems, digital gardens, and webrings; some imagined a more open and independent Web for the future. Third, not only wasn’t I interested in micro-blogging myself, but I didn’t care for most of the updates from the people I was following. I realized that I had been using Twitter, and now Mastodon, as an information hub rather than a social network. I was following people just to get notified when they blogged on their websites; I was following bots to get content from link aggregators. Mastodon wasn’t the right tool for that job. Things clicked for me when I learned about the IndieWeb movement, particularly their notion of social readers. Trying Mastodon had been nice, but what I needed to reconnect with the good side of the Web was a feed reader, one I could adjust arbitrarily to my preferences. There are plenty of great RSS readers out there, and I did briefly try a few, but I knew this was the perfect excuse for me to get back in touch with software development: I was going to build my own personal reader. Goals As a user, I had some ideas of what I wanted from this project. Rather than the email inbox metaphor I’ve most commonly seen in RSS readers, I wanted something resembling the Twitter and Mastodon home feed. That is: instead of a backlog to clear every day, a stream of interesting content whenever I opened the app. The feed would include articles from blogs, magazines, news sites, and link aggregators, mixed with notifications from personal accounts (Mastodon, Goodreads, GitHub). The parsing should be customizable to ensure a consistent look and feel, independent of the shape of the data each source published. I wasn’t interested in implementing the “social” features of a fully-fledged indie reader. 
I didn’t mind opening another tab to comment, nor having my “content” scattered across third-party sites. That’s what I started with but, once I had the basic functionality in place, I planned to drive development by what felt right and what felt missing as a user. My short-term goal was to answer, as soon as possible, this question: could this eventually become my primary —even my sole— source of information on the Web? If not, I’d drop the project right away. Otherwise, I could move on to whatever was missing to realize that vision. ∗ ∗ ∗ As a developer, I wanted to test some of the ideas I’d been ruminating on for over a year. Although I hadn’t yet formulated it in those terms, I wanted to apply what I expressed in another post as: user > ops > dev. This meant that, when prioritizing tasks or making design trade-offs, I would choose ease of operation over development convenience, and I would put user experience above everything else. Since this was going to be an app for personal use, and I had no intention of turning it into anything else, putting the user first just meant dogfooding: putting my “user self” —my needs— first. Even if I eventually wanted other people to try the app, I presumed that I had a better chance of making something useful by designing it ergonomically for me, than by trying to satisfy some ideal user. It was very important to me that this didn’t turn into a learning project or, worse, a portfolio project. This wasn’t about productivity: it was about reconnecting with the joy of software development; the pleasure wouldn’t come from building something but from using something I had built. Assuming myself as the single target audience meant that I could postpone whatever I didn’t need right away (e.g. user authentication), that I could tackle overly-specific features early on (e.g. send to Kindle), that I could assume programming knowledge for some scenarios (e.g. feed parser customization, Mastodon login), etc. Design Given that mental framework, I needed to make some initial technical decisions. User Interface Although this was going to be a personal tool, and I wanted it to work on a local-first setup, I knew that if it worked well I’d want to access it from my phone, in addition to my laptop. This meant that it needed to be a Web application: I wanted the Web UI to be somewhat dynamic, but I didn’t intend to build a separate front-end application, learn a new front-end framework, or re-invent what the browser already provided. Following the boring tech and radical simplicity advice, I looked for server-side rendering libraries. I ended up using a mix of htmx and its companion hyperscript, which felt like picking Web development up where I’d left off over a decade ago. Architecture Making the app ops-friendly meant not only that I wanted it to be easy to deploy, but easy to set up locally, with minimal infrastructure —not assuming Docker, Nix, etc. A “proper” IndieWeb reader, at least as described by Aaron Parecki, needs to be separated into components, each implementing a different protocol (Micropub, Microsub, Webmentions, etc.). This setup enforces a separation of concerns between content fetching, parsing, displaying, and publishing. I felt that, in my case, such architecture would complicate development and operations without buying me much as a user. Since I was doing all the development myself, I preferred to build a monolithic Web application. I chose SQLite for the database, which meant one less component to install and configure. 
In addition to the Web server, I needed some way to periodically poll the feeds for content. The simplest option would have been a cron job, but that seemed inconvenient, at least for the local setup. I had used task runners like Celery in the past, but that required adding a couple of extra components: a consumer process to run alongside the app and something like Redis to act as a broker. Could I get away with running background tasks in the same process as the application? That largely depended on the runtime of the language. Programming language At least from my superficial understanding of it, Go seemed like the best fit for this project: a simple, general-purpose language, garbage-collected but fast enough, with a solid concurrency model and, most importantly for my requirements, one that produced easy-to-deploy binaries. (I later read a similar case for Golang from the Miniflux author). The big problem was that I’d never written a line of Go, and while I understood it’s a fairly accessible language to pick up, I didn’t want to lose focus by turning this into a learning project. Among the languages I was already fluent in, I needed to choose the one I expected to be most productive with, the one that let me build a prototype to decide whether this project was worth pursuing. So I chose Python. The bad side of using Python was that I had to deal with its environment and dependency quirks, particularly its reliance on the host OS libraries. Additionally, it meant I’d have to get creative if I wanted to avoid extra components for the periodic tasks. (After some research I ended up choosing gevent and an extension of the Huey library to run them inside the application process). The good side was that I got to use great Python libraries for HTTP, feed parsing, and scraping. Testing (or lack thereof) I decided not to bother writing tests, at least initially. In a sense, this felt “dirty”, but I still think it was the right call given what I was trying to do: Development There’s a kind of zen flow that programmers unblock when they use their software daily. I don’t mean just testing it but experiencing it as an end user. There’s no better catalyst for ideas and experimentation, no better prioritization driver than having to face the bugs, annoyances, and limitations of an application first-hand. After some trial and error with different UI layouts and features, a usage pattern emerged: open the app, scroll down the main feed, pin to read later, open to read now, bookmark for future reference. I decided early on that I wanted the option to read articles without leaving the app (among other things, to avoid paywalls and consent popups). I tried several Python libraries to extract HTML content, but none worked as well as the readability one used by Firefox. Since it’s a JavaScript package, I had to resign myself to introducing an optional dependency on Node.js. With the basic functionality in place, a problem became apparent. Even after curating the list of feeds and carefully distributing them in folders, it was hard to get interesting content by just scrolling items sorted by publication date: occasional blog posts would get buried behind Mastodon toots, magazine features behind daily news articles. I needed to make the sorting “smarter”. Considering that I only followed sources I was interested in, it was safe to assume that I’d want to see content from the least frequent ones first. 
If a monthly newsletter came out in the last couple of days, that should show up at the top, before any micro-blogging or daily news items. So I classified sources into “frequency buckets” and sorted the feed to show the least frequent buckets first. Finally, to avoid this “infrequent content” sticking at the top every time I opened the app, I added a feature that automatically marks entries as “already seen” as I scroll down the feed. This way I always get fresh content and never miss “rare” updates. ∗ ∗ ∗ At first, I left the app running on a terminal tab on my laptop and used it while I worked on it. Once I noticed that I liked what was showing up in the feed, I set up a Raspberry Pi server in my local network to have it available all the time. This, in turn, encouraged me to improve the mobile rendering of the interface, so I could access it from my phone. I eventually reached a point where I missed using the app when I was out, so I decided to deploy it to a VPS. This forced me to finally add the authentication and multi-user support I’d been postponing and allowed me to give access to a few friends for beta testing. (The VPS setup also encouraged me to buy a domain and set up this website, getting me closer to the IndieWeb ideal that inspired me in the first place). Conclusion It took me about three months of (relaxed) work to put together my personal feed reader, which I named feedi. I can say that I succeeded in reengaging with software development, and in building something that I like to use myself, every day. Far from a finished product, the project feels more like my Emacs editor config: a perpetually half-broken tool that can nevertheless become second nature, hard to justify from a productivity standpoint but fulfilling because it was built on my own terms. I’ve been using feedi as my “front page of the internet” for a few months now. Beyond convenience, by using a personal reader I’m back in control of the information I consume, actively on the lookout for interesting blogs and magazines, better positioned for discovery and even surprise.
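As a concrete aside, the “frequency buckets” sorting described in the Development section above could be sketched roughly as follows. This is only an illustration of the idea, not the actual feedi code; the Entry structure, the posts-per-week field, and the bucket thresholds are all made up.

```python
from dataclasses import dataclass
from datetime import datetime


# Hypothetical, simplified stand-ins for the reader's real models.
@dataclass
class Entry:
    title: str
    published: datetime
    source_posts_per_week: float  # how often this entry's source publishes


def frequency_bucket(posts_per_week: float) -> int:
    # Lower bucket = rarer source = shown first. Thresholds are arbitrary.
    if posts_per_week <= 0.25:  # roughly monthly or less
        return 0
    if posts_per_week <= 1:     # roughly weekly
        return 1
    if posts_per_week <= 7:     # roughly daily
        return 2
    return 3                    # micro-blogging / high-volume feeds


def sort_feed(entries: list[Entry]) -> list[Entry]:
    # Rare sources first; within a bucket, newest entries first.
    return sorted(
        entries,
        key=lambda e: (frequency_bucket(e.source_posts_per_week),
                       -e.published.timestamp()),
    )
```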
-
Visualizing fighting game mechanics Overview: Tekken is a Japanese fighting game that has been around since the mid-’90s. It continues to boast a strong community of players and remains competitive with other leading titles on the market. If you’ve never heard of Tekken before, the video below will give you an idea of what it’s like: Problem being solved: Many players learning the game have a hard time understanding how to build tactics and develop a gameplan. Many existing guides online teach players the technical aspects of the game, but fail to provide enough context on how certain moves work in gameplay. Fighting moves are usually taught in isolation and not relative to each other. This project aims to help players compare different moves to better understand them. Target audience: Beginners who have some familiarity with Tekken and have already chosen a character to specialize in. Key objective: To design a physical visual aid for players to use while learning Tekken. Players are equipped to learn one character and develop a better understanding of how to think about fighting games.
-
BSC presents Sargantana, the new generation of the first open-source chips designed in Spain The Barcelona Supercomputing Center - Centro Nacional de Supercomputación (BSC-CNS) presented on Wednesday the new Sargantana chip, the third generation of open source processors designed entirely at the BSC. The development of Sargantana is a crucial step forward in reinforcing BSC's leading position in RISC-V open source computing technology research in Europe. Sargantana (the name of the lizard in Aragonese and Catalan) is the third generation of the Lagarto processors, the first open source chips developed in Spain, in the framework of the DRAC project (Designing RISC-V-based Accelerators for next generation Computers), and is one of the most advanced open source chips in Europe at the academic level. The new Sargantana features better performance than its two predecessors - Lagarto Hun (2019) and DVINO (2021) - and is the first processor in the Lagarto family to break the gigahertz barrier in operating frequency. The fact that the instruction set architecture (ISA) of these new processors is open source, and therefore non-proprietary and accessible to all, reduces technological dependence on large multinational corporations by enabling innovation through the collaboration of companies and institutions without the limitations of proprietary architectures. The RISC-V free hardware architecture, on which these new chips are based, could bring about a technological revolution in the hardware world like Linux did in the software world. "The launch of Sargantana is a further step forward in the development of European RISC-V based technology, an embryo of the future European high-performance processor. This open hardware will be vital to ensure technological sovereignty and maintain European industrial competitiveness, and consolidates the BSC's role as a pioneer in Europe in the introduction of open source for chip design," said BSC director Mateo Valero. In 2017, the European Union identified the lack of own hardware as one of the main vulnerabilities, due to the risk of industrial espionage posed by an over-reliance on chips designed and produced outside Europe, especially in the United States, Taiwan, China, Japan and South Korea. The BSC was then tasked by the EU to lead the scientific development of future European chips to provide the market with an open and local alternative, suitable for high-performance computing, artificial intelligence, the automotive sector and the internet of things. Joint work coordinated by the BSC Researchers from other universities and research centres such as the Centro de Investigación en Computación del Instituto Politécnico Nacional de México (CIC-IPN), the Centro Nacional de Microelectrónica (CNM-CSIC), the Universitat Politècnica de Catalunya (UPC), the Universitat Autònoma de Barcelona (UAB), the Universitat de Barcelona (UB) and the Universitat Rovira i Virgili (URV) have participated in the development of Sargantana. The project was coordinated by BSC researcher Miquel Moretó, who highlighted the advantages of open-source semiconductor design to enable collaboration between companies and academic institutions around the world. "The new Sargantana chip is freely available to all, enabling a new era of processor innovation through open collaboration in which anyone, anywhere can benefit from RISC-V technology," he said. 
Moretó pointed out that Sargantana is an experimental chip, a research prototype that will allow us to test applications with RISC-V technology and deepen our knowledge, but it is not yet designed to be used in computers or other devices. "We are developing a technology that will allow Spain and Europe to design their own increasingly competitive processors in the future, in addition to training future professionals in a sector that will undoubtedly add great value to the production chain," added Moretó, who highlighted the joint efforts of Catalonia, Spain and Europe to have European technology made in Barcelona and to train engineers in this field. European leadership in RISC-V This objective is in line with the idea pointed out by Mateo Valero of making Barcelona an international benchmark in processor design. "We have the talent, the technological knowledge and the scientific environment necessary for Barcelona and its surroundings to be able to compete with any institution or region in the world and become a Design Valley that drives the creation of companies and new jobs", said Valero. The Sargantana project has received funding from the Strategic Transformation Project for the semiconductor sector, known as PERTE Chip, the largest investment of all industrial transformation projects approved by the Spanish Government, as well as European funds from the ERDF Operational Programme of Catalonia 2014-2020, with the support of the Catalan Government. The Sargantana presentation ceremony was held at the BSC facilities in Barcelona as part of the first day of the Spanish Open Hardware Alliance (SOHA), an association that brings together Spanish universities and research centres with the aim of promoting research in the area of Open Computer Technology and Architecture, thus contributing to the training of talent that allows the creation of high quality jobs.
-
Exploit Title: Powered By Assamlook.com - Sql Injection
bonummaster replied to mkdirlove's topic in Pwned
-
FREECALL ANYCOUNTRY USING VOIP CALL PHILIPPINES HOW? 1.FIRST INSTALL ITEL MOBILE DIALER PLAYSTORE SEARCH ITEL MOBILE DIALER 2.NEXT INPUT OPERATOR CODE: OP CODE:57213 NEXT 3.INPUT USERNUMBER AND PASSWORD ACCESS BONUMTELECOM USERNUMBER PASSWORD 092600367491 245977 092601024541 709811 092602381074 338006 092603072159 640040 092604564227 589296 092604910198 238288 092605068037 937383 092606890811 400466 092607209542 804946 092608695339 434826 092608835306 868828 092609096486 545254 092610937359 348988 092616094338 275307 092617031022 632209 092617292205 798573 092617341185 495126 092617639362 925059 092619227791 406238 092620178924 655499 092620220673 591185 092621608647 371408 092621672693 590752 092622671926 871241 092622959402 106613 092623240952 369582 092623552925 638607 092623724577 192429 092624042091 316384 092627055417 881110 092629625838 221106 092631235863 271246 092631468681 443559 092632594210 278043 092632984873 689658 092633588521 358264 092633721748 338007 092634673741 967481 092635114905 152373 092635456217 196445 092635835736 125924 092638327429 625837 092638633076 273950 092639497635 226089 092640252455 151753 092640682875 884125 092642547020 926449 092642856034 112060 092642896045 764275 092645335321 187547 092645971999 267268 092647857512 530747 092648235321 996992 092649248966 502789 092649607759 731013 092649714564 931083 092649875032 786812 092651953186 923812 092652320131 949208 092652599622 486115 092652688472 775021 092652974483 247120 092654833208 105341 092655074333 443908 092655766076 144375 092656480817 159581 092658291136 282508 092660137342 847436 092660642289 161613 092661659742 149783 092662230997 364588 092663100136 143888 092663386817 919882 092663802098 346835 092665022039 924723 092665124743 754309 092667493370 805041 092669806722 256941 092672559997 482630 092673337017 324206 092673380600 374722 092673652723 981943 092673782010 389088 092676452278 804434 092680363116 632495 092680587110 972551 092684758811 253562 092685888002 829025 092687295636 253473 092688189819 265633 092689580381 450735 092690182416 769235 092691686492 939495 092692449636 944585 092692738757 297282 092694598379 529832 092695701756 740933 092697311749 310801 092697698130 869243 092699542571 829013 SAVE. NOW TEST ACTIVE USERNUNBER VOIP TO VOIP CALL NO NEED TOKEN NO NEED LOAD -GOLD VOICE -HIGHQUALITY PREMIUM CALL ANY COUNTRY PROJECT BY BONUMMASTER TO ALL OFW AND PINOY SUPPORT! VOIP PH BONUMTELECOM ENJOY!
-
ıllıllı #Innocent__Wizard ıllıllı 𝘼𝙇𝙇-𝙄𝙉-𝙊𝙉𝙀 𝘼𝙒𝙀𝙎𝙊𝙈𝙀 𝘾𝙔𝘽𝙀𝙍𝙎𝙀𝘾 𝙍𝙀𝙎𝙊𝙐𝙍𝘾𝙀𝙎 🗿 (All open source resources) •Awesome Red Team Ops :- https://github.com/CyberSecurityUP/Awesome-Red-Team-Operations •Awesome Red Teaming :- https://github.com/yeyintminthuhtut/Awesome-Red-Teaming •Awesome Red Team ToolKit :- https://0x1.gitlab.io/pentesting/Red-Teaming-Toolkit/ •Awesome Blue Team Ops :- https://github.com/fabacab/awesome-cybersecurity-blueteam •Awesome OSINT :- https://github.com/jivoi/awesome-osint •Awesome DevSecOps :- https://github.com/devsecops/awesome-devsecop •Awesome Pentest :- https://github.com/enaqx/awesome-pentest •Awesome Cloud Pentest :- https://github.com/CyberSecurityUP/Awesome-Cloud-PenTest •Awesome Shodan :- https://github.com/jakejarvis/awesome-shodan-queries •Awesome AWS Security :- https://github.com/jassics/awesome-aws-security •Awesome Malware Analysis & Reverse Engineering :- https://github.com/CyberSecurityUP/Awesome-Malware-Analysis-Reverse-Engineering •Awesome Malware Analysis:- https://github.com/rshipp/awesome-malware-analysis •Awesome Computer Forensic :- https://github.com/cugu/awesome-forensics •Awesome Cloud Security :- https://github.com/4ndersonLin/awesome-cloud-security •Awesome Reverse Engineering :- https://github.com/tylerha97/awesome-reversing •Awesome Threat Intelligence :- https://github.com/hslatman/awesome-threat-intelligence •Awesome SOC :- https://github.com/cyb3rxp/awesome-soc •Awesome Social Engineering :- https://github.com/v2-dev/awesome-social-engineering •Awesome Web Security :- https://github.com/qazbnm456/awesome-web-security#prototype-pollution •Awesome Forensics :- https://github.com/cugu/awesome-forensics •Awesome API Security :- https://github.com/arainho/awesome-api-security •Awesome WEB3 :- https://github.com/Anugrahsr/Awesome-web3-Security •Awesome Incident Response :- https://github.com/Correia-jpv/fucking-awesome-incident-response •Awesome Search Engines :- https://github.com/edoardottt/awesome-hacker-search-engines •Awesome Smart Contract Security:- https://github.com/saeidshirazi/Awesome-Smart-Contract-Security •Awesome Terraform :- https://github.com/shuaibiyy/awesome-terraform •Awesome Cloud Pentest :- https://github.com/CyberSecurityUP/Awesome-Cloud-PenTest •Awesome Burpsuite Extensions :- https://github.com/snoopysecurity/awesome-burp-extensions •Awesome IOT :- https://github.com/phodal/awesome-iot/blob/master/README.md •Awesome IOS Security :- https://github.com/Cy-clon3/awesome-ios-security •Awesome Embedded & IOT Security :- https://github.com/fkie-cad/awesome-embedded-and-iot-security •Awesome OSINT Bots :- https://github.com/ItIsMeCall911/Awesome-Telegram-OSINT#-bots •Awesome IOT Hacks :- https://github.com/nebgnahz/awesome-iot-hacks •Awesome WEB3 Security:- https://github.com/Anugrahsr/Awesome-web3-Security •Awesome Security :- https://github.com/sbilly/awesome-security •Awesome Reversing :- https://github.com/tylerha97/awesome-reversing •Awesome Piracy :- https://github.com/Igglybuff/awesome-piracy •Awesome Web Hacking :- https://github.com/infoslack/awesome-web-hacking •Awesome Memory Forensics :- https://github.com/digitalisx/awesome-memory-forensics •Awesome OSCP :- https://github.com/0x4D31/awesome-oscp •Awesome RAT :- https://github.com/alphaSeclab/awesome-rat 🌟 𝐉𝐎𝐈𝐍 𝐅𝐎𝐑 𝐌𝐎𝐑𝐄 🌟 BONUNTEAM ♡ ㅤ ❍ㅤ ⌲ ˡᶦᵏᵉ ʳᵉᵃᶜᵗ ˢʰᵃʳᵉ
-
Cybersecurity Tools IP & URL Reputation: 1. VirusTotal - [link](https://www.virustotal.com/) 2. IBM X-Force Exchange - [link](https://exchange.xforce.ibmcloud.com/) 3. URLVoid - [link](https://www.urlvoid.com/) 4. IPVoid - [link](https://www.ipvoid.com/) 5. AlienVault OTX - [link](https://otx.alienvault.com/) 6. Cisco Talos Intelligence - [link](https://www.talosintelligence.com/) 7. IPinfo - [link](https://ipinfo.io/) 8. AbuseIPDB - [link](https://www.abuseipdb.com/) File | Hash | Search | Analysis | Sandboxing: 1. FileSec - [link](https://filesec.io/) 2. Hybrid Analysis - [link](https://www.hybrid-analysis.com/) 3. VxStream Sandbox - [link](https://www.vxstream-sandbox.com/) 4. Cuckoo Sandbox - [link](https://cuckoosandbox.org/) 5. Joe Sandbox - [link](https://www.joesecurity.org/) 6. ThreatConnect - [link](https://www.threatconnect.com/) 7. ThreatMiner - [link](https://www.threatminer.org/) 8. VirusBay - [link](https://beta.virusbay.io/) 9. Any.Run - [link](https://any.run/) 10. VirusShare - [link](https://virusshare.com/) 11. Jotti's Malware Scan - [link](https://virusscan.jotti.org/) 12. PDF Examiner - [link](https://pdfexaminer.com/) 13. PEStudio - [link](https://www.winitor.com/) 14. Malwr - [link](https://malwr.com/) 15. VirusTotal Graph - [link](https://www.virustotal.com/gui/graphs) Network Monitoring & Intrusion Detection: 1. Wireshark - [link](https://www.wireshark.org/) 2. Snort - [link](https://www.snort.org/) 3. Suricata - [link](https://suricata-ids.org/) 4. Security Onion - [link](https://securityonion.net/) 5. Bro (Zeek) - [link](https://zeek.org/) 6. Moloch - [link](https://molo.ch/) 7. Sguil - [link](https://bammv.github.io/sguil/index.html/) 8. NetworkMiner - [link](https://www.netresec.com/?page=NetworkMiner/) Vulnerability Scanning: 1. Nessus - [link](https://www.tenable.com/products/nessus/) 2. OpenVAS - [link](https://www.openvas.org/) 3. Qualys - [link](https://www.qualys.com/) 4. Acunetix - [link](https://www.acunetix.com/) 5. Nexpose - [link](https://www.rapid7.com/products/nexpose/) 6. Nikto - [link](https://cirt.net/Nikto2/) 7. Retina - [link](https://www.beyondtrust.com/products/retina/) 8. Lynis - [link](https://cisofy.com/lynis/) Firewall & Network Security: 1. pfSense - [link](https://www.pfsense.org/) 2. Suricata - [link](https://suricata-ids.org/) 3. FirewallD - [link](https://firewalld.org/) 4. Smoothwall - [link](https://www.smoothwall.com/) 5. OPNsense - [link](https://opnsense.org/) 6. IPFire - [link](https://www.ipfire.org/) 7. Endian Firewall - [link](https://www.endian.com/) 8. Untangle NG Firewall - [link](https://www.untangle.com/) Password Management: 1. LastPass - [link](https://www.lastpass.com/) 2. KeePass - [link](https://keepass.info/) 3. Dashlane - [link](https://www.dashlane.com/) 4. 1Password - [link](https://1password.com/) 5. Bitwarden - [link](https://bitwarden.com/) 6. RoboForm - [link](https://www.roboform.com/) 7. Enpass - [link](https://www.enpass.io/) 8. Keeper Security - [link](https://www.keepersecurity.com/) Endpoint Security: 1. Malwarebytes - [link](https://www.malwarebytes.com/) 2. Bitdefender - [link](https://www.bitdefender.com/) 3. Sophos - [link](https://www.sophos.com/) 4. McAfee - [link](https://www.mcafee.com/) 5. Kaspersky Endpoint Security - [link](https://www.kaspersky.com/) 6. Symantec Endpoint Protection - [link](https://www.symantec.com/) 7. ESET Endpoint Security - [link](https://www.eset.com/) 8. CrowdStrike Falcon - [link](https://www.crowdstrike.com/) 9. 
Trend Micro Apex One - [link](https://www.trendmicro.com/) 10. Carbon Black - [link](https://www.carbonblack.com/) 11. Cylance - [link](https://www.cylance.com/) 12. SentinelOne - [link](https://www.sentinelone.com/) 13. Palo Alto Networks Traps - [link](https://www.paloaltonetworks.com/products/endpoint-security/traps/) 14. Webroot - [link](https://www.webroot.com/) 15. Microsoft Defender Antivirus - [link](https://www.microsoft.com/en-us/windows/security/defender-antivirus/)