Tuesday, 11 September 2018

FREE AND UNLIMITED FAST SPEED WITH VPN HUB |TechTalksGroup|



VPN HUB - Free and unlimited fast speed on your mobile

UNBLOCK the Internet and Browse Securely with VPN HUB for Android. Get it Free on the Google Play Store.

Link: https://www.vpnhub.com/

So that's it. Hope you guys like it. If yes, then please comment down below, and don't forget to like, follow, and share our social media platforms.

Facebook Page:- https://www.facebook.com/theprogrammer.harshit/
Google Plus:- https://plus.google.com/u/0/communiti…/117296242526461886479
Blog:- https://www.techtalksgroup.blogspot.com
Instagram:- https://www.instagram.com/theprogrammer.har

Layers of the OSI Model Explained || tech talks group ||



The Open Systems Interconnection (OSI) model defines a networking framework for implementing protocols in layers, with control passed from one layer to the next. It is primarily used today as a teaching tool. It conceptually divides computer network architecture into seven layers in a logical progression. The lower layers deal with electrical signals, chunks of binary data, and the routing of that data across networks. The higher layers cover network requests and responses, the representation of data, and network protocols as seen from a user's point of view.

The OSI model was originally conceived as a standard architecture for building network systems and indeed, many popular network technologies today reflect the layered design of OSI.

1.  Physical Layer

At Layer 1, the Physical layer of the OSI model is responsible for the ultimate transmission of digital data bits from the Physical layer of the sending (source) device over network communications media to the Physical layer of the receiving (destination) device. Examples of Layer 1 technologies include Ethernet cables and Token Ring networks. Additionally, hubs and repeaters are standard network devices that function at the Physical layer, as are cable connectors.

At the Physical layer, data are transmitted using the type of signaling supported by the physical medium: electric voltages, radio frequencies, or pulses of infrared or ordinary light.

2.  Data Link Layer

When receiving data from the Physical layer, the Data Link layer checks for physical transmission errors and packages bits into data "frames". The Data Link layer also manages physical addressing schemes such as MAC addresses for Ethernet networks, controlling how the various network devices share access to the physical medium. Because the Data Link layer is the single most complex layer in the OSI model, it is often divided into two parts, the "Media Access Control" sublayer and the "Logical Link Control" sublayer.
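
To make the idea of a frame concrete, here is a minimal Python sketch (with hand-crafted byte values, purely for illustration) that unpacks the 14-byte Ethernet II header into its destination MAC address, source MAC address, and EtherType fields:

    import struct

    def parse_ethernet_header(frame: bytes):
        """Split the first 14 bytes of an Ethernet II frame into its fields."""
        dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
        as_mac = lambda raw: ":".join(f"{b:02x}" for b in raw)
        return as_mac(dst), as_mac(src), hex(ethertype)

    # A hand-built header: broadcast destination, EtherType 0x0800 (IPv4).
    frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload"
    print(parse_ethernet_header(frame))
    # ('ff:ff:ff:ff:ff:ff', '00:11:22:33:44:55', '0x800')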

3.  Network Layer

The Network layer adds the concept of routing above the Data Link layer. When data arrives at the Network layer, the source and destination addresses contained inside each frame are examined to determine if the data has reached its final destination. If it has, Layer 3 formats the data into packets and delivers them up to the Transport layer. Otherwise, the Network layer updates the addressing information and pushes the data back down to the lower layers so it can be forwarded toward its destination.
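
As a small illustration of Layer 3 addressing, the sketch below (again Python, with made-up header bytes) pulls the source and destination IP addresses out of an IPv4 header, which is exactly the information a router examines when deciding where data goes next:

    import socket
    import struct

    def ipv4_addresses(packet: bytes):
        """Read the source and destination addresses of an IPv4 header.
        They sit at byte offsets 12-15 and 16-19."""
        src, dst = struct.unpack("!4s4s", packet[12:20])
        return socket.inet_ntoa(src), socket.inet_ntoa(dst)

    # A minimal 20-byte header: 12 bytes of other fields, then the addresses.
    header = bytes.fromhex("45000054000040004001") + bytes(2) \
             + socket.inet_aton("192.168.0.2") + socket.inet_aton("10.0.0.1")
    print(ipv4_addresses(header))  # ('192.168.0.2', '10.0.0.1')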

4.  Transport Layer

The Transport layer delivers data across network connections. TCP is the most common example of a Layer 4 transport protocol. Different transport protocols may support a range of optional capabilities, including error recovery, flow control, and retransmission.
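
In code, choosing a transport protocol usually comes down to choosing a socket type. A minimal Python sketch of the contrast:

    import socket

    # TCP (SOCK_STREAM): connection-oriented, with error recovery,
    # flow control, and retransmission handled by the protocol.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # UDP (SOCK_DGRAM): connectionless and best-effort; none of those
    # optional capabilities, but lower overhead.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    tcp.close()
    udp.close()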

5.  Session Layer

The Session Layer manages the sequence and flow of events that initiate and tear down network connections. At Layer 5, it is built to support multiple types of connections that can be created dynamically and run over individual networks.

6.  Presentation Layer

The Presentation layer has the simplest function of any layer in the OSI model. At Layer 6, it handles syntax processing of message data, such as format conversions and the encryption/decryption needed to support the Application layer above it.
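
Here is a tiny Python sketch of this kind of syntax processing, using a character-set conversion (encryption, in practice, is usually delegated to TLS libraries):

    # The same text, converted between two on-the-wire byte formats.
    message = "café"

    utf8_bytes = message.encode("utf-8")      # b'caf\xc3\xa9'
    latin1_bytes = message.encode("latin-1")  # b'caf\xe9'

    # A receiver must decode with the agreed-upon format to recover the text.
    assert utf8_bytes.decode("utf-8") == latin1_bytes.decode("latin-1") == message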

7.  Application Layer

The Application layer supplies network services to end-user applications. Network services are typically protocols that work with the user's data. For example, in a Web browser application, the Application layer protocol HTTP packages the data needed to send and receive Web page content. Layer 7 provides data to (and obtains data from) the Presentation layer.
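
For example, a few lines of Python are enough to issue an Application-layer request and let the lower layers handle delivery (example.com is just a placeholder site):

    import urllib.request

    # HTTP, a Layer 7 protocol, packages the request; TCP/IP below
    # takes care of actually moving the bytes.
    with urllib.request.urlopen("https://example.com/") as response:
        print(response.status, response.headers["Content-Type"])
        body = response.read()

    print(len(body), "bytes of page content")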


So that's it. Hope you guys like it. If yes, then please comment down below, and don't forget to like, follow, and share our social media platforms.

Facebook Page:- https://www.facebook.com/theprogrammer.harshit/

Monday, 10 September 2018

UK’s Critical Infrastructure Vulnerable To DDoS Attacks ||tech talks group||


According to data revealed under the Freedom of Information Act by Corero Network Security, over one-third of critical infrastructure organizations in the UK are vulnerable to DDoS attacks. As per Corero, 39 percent of companies have ignored the risk of attacks on their network, leaving themselves vulnerable to data breaches, malware, and ransomware.

In a statement issued today, Sean Newman, director of product management at Corero, comments: “Cyber-attacks against national infrastructure have the potential to inflict significant, real-life disruption and prevent access to critical services that are vital to the functioning of our economy and society. These findings suggest that many such organizations are not as cyber resilient as they should be, in the face of growing and sophisticated cyber threats.”

Newman adds, “By not detecting and investigating these short, surgical, DDoS attacks on their networks, infrastructure organizations could also be leaving their doors wide-open for malware or ransomware attacks, data theft or more serious cyber attacks.”

Under the UK government’s proposals to implement the EU’s Network and Information Systems (NIS) directive, these organizations could be liable for fines of up to £17 million, or four percent of global turnover.

David Emm, the principal security researcher at Kaspersky Lab said, “The world isn’t ready for cyber-threats against critical infrastructure – but criminals are clearly ready and able to launch attacks on these facilities. We’ve seen attempts on power grids, oil refineries, steel plants, financial infrastructure, seaports and hospitals – and these are cases where organizations have spotted attacks and acknowledged them. However, many more companies do neither, and the lack of reporting these incidents hampers risk assessment and response to the threat.”

Edgard Capdevielle, CEO of Nozomi Networks, also commented: “This report emphasizes the impact of DDoS attacks and how they are often used as a cover to distract security teams while infecting systems with malware or stealing data. Such initiatives are often the first step in “low and slow” attacks.” He further added that “In light of this information, CNI organizations should give a high priority to re-assessing their cyber-security programs, evaluate where they are in relation to government recommendations, and inform themselves about current technologies available for protection….The right approach is to both shore up defenses and be able to quickly respond when attacks do occur.”

On the targeting of CNI, Eldon Sprickerhoff, founder and chief security strategist at eSentire, said, “Although cyber-security regulations will require significant effort for the companies that are affected, this new legislation by the UK government demonstrates that they understand the severity of cyber-threats in today’s digital world and the destruction they can cause, if undeterred. Even if you’re not a CNI, cyber-threats should concern you. With cyber-criminals constantly adjusting their tactics, it is imperative that companies never stop defending themselves by constantly improving and expanding their cyber-security practices. Managed detection and response and incident response planning are common ways companies can stay ahead of their attackers.”


Here are five tips to help you stay ahead of cybercriminals: 
  • Encryption – store sensitive data so that it is readable only with a digital key
  • Integrity checks – regularly check for any changes to system files (see the sketch after this list)
  • Network monitoring – use tools to help you detect suspicious behavior
  • Penetration testing – conduct controlled cyber-attacks on systems to test their defenses and identify vulnerabilities
  • Education – train your employees in cyber-security awareness and tightly manage access to any confidential information
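
As a small illustration of the integrity-checks tip, here is a Python sketch (the /etc path is just an example) that fingerprints files with SHA-256 and flags any that change:

    import hashlib
    from pathlib import Path

    def fingerprint(path: Path) -> str:
        """Return the SHA-256 hex digest of a file's contents."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    # Record a baseline once, then re-check later: a changed digest
    # means the file was modified.
    baseline = {path: fingerprint(path) for path in Path("/etc").glob("*.conf")}

    for path, digest in baseline.items():
        if fingerprint(path) != digest:
            print(f"WARNING: {path} has changed since the baseline")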


That's it. Hope you guys like it. If yes, then please comment down below, and don't forget to like, follow, and share our social media platforms.

Thursday, 6 September 2018

Everything you must know about RFCs, or Internet Requests for Comments || tech talks group ||


Request for Comments (RFC) documents have been used by the Internet community for more than 40 years as a way to define new standards and share technical information. Researchers from universities and corporations publish these documents to offer best practices and solicit feedback on Internet technologies. RFCs are managed today by a worldwide organization called the Internet Engineering Task Force (IETF).



The very first RFCs, including RFC 1, were published in 1969. Although the "host software" technology discussed in RFC 1 has long since become obsolete, documents like this one offer an interesting glimpse into the early days of computer networking. Even today, the plain-text format of the RFC remains essentially the same as it was at the beginning.
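
You can still pull that original document down yourself; assuming the RFC Editor's current URL scheme, a few lines of Python will do it:

    import urllib.request

    # Fetch RFC 1 ("Host Software", April 1969) as plain text.
    url = "https://www.rfc-editor.org/rfc/rfc1.txt"
    with urllib.request.urlopen(url) as response:
        text = response.read().decode("ascii", errors="replace")

    print(text[:200])  # the title block, formatted much as it was in 1969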

Many popular computer networking technologies have been documented in RFCs over the years, often while still in their early stages of development.

Even though the basic technologies of the Internet have matured, the RFC process continues to run through the IETF. Documents are drafted and progress through several stages of review before final ratification. The topics covered in RFCs are intended for highly specialized professional and academic research audiences. Rather than Facebook-style public comment postings, comments on RFC documents are given through the RFC Editor site. Final standards are published in the master RFC Index.

Do Non-Engineers Need to Worry About RFCs?

Because the IETF is staffed with professional engineers, and because it tends to move very slowly, the average Internet user doesn't need to focus on reading RFCs. These standards documents are intended to support the underlying infrastructure of the Internet; unless you're a programmer dabbling in networking technologies, you're likely to never need to read them or even be familiar with their content.


However, the fact that the world's network engineers do adhere to RFC standards means that the technologies we take for granted -- Web browsing, sending and receiving email, using domain names -- are global, interoperable and seamless for consumers.

So that's it. Hope you guys like it. If yes, then please comment down below, and don't forget to like, follow, and share our social media platforms.

Wednesday, 5 September 2018

Everything you must know about the History of the Linux Operating System ||tech talks group||




Linux is an operating system used to power pretty much any device you can think of.

Linux Overview

When most people think of Linux, they think of a desktop operating system used by geeks and techies, or a server-based operating system used to power websites.
Linux is everywhere. It is the engine behind most smart devices. The Android phone that you are using runs a Linux kernel; that smart fridge that can restock itself runs Linux; there are smart lightbulbs that can talk to each other, all with the help of Linux. Even rifles used by the army run Linux.
A modern buzz term is "the internet of things". The truth is that there really is only one operating system that powers the internet of things, and that is Linux.
From a business point of view, Linux is also used on large supercomputers, and it is used to run the New York Stock Exchange.
Linux can also, of course, be used as the desktop operating system on your netbook, laptop, or desktop computer.

Operating Systems

The operating system is special software used to interact with the hardware within a computer.
If you consider a standard laptop, the hardware devices that the operating system has to manage include the CPU, the memory, the graphics processing unit, a hard drive, a keyboard, a mouse, a screen, USB ports, a wireless network card, an ethernet card, a battery, and the backlight for the screen.
In addition to the internal hardware, the operating system also needs to be able to interact with external devices such as printers, scanners, joypads, and a wide array of USB-powered devices.
The operating system has to manage all the software on the computer, making sure each application has enough memory to perform and switching processes between being active and inactive.
The operating system has to accept input from the keyboard and act upon that input to perform the wishes of the user.
Examples of operating systems include Microsoft Windows, Unix, Linux, BSD, and macOS.

Overview of GNU/Linux

A term you might hear every now and then is GNU/Linux. What is GNU/Linux and how does it differ from normal Linux?
From a desktop Linux user point of view, there is no difference.
Linux is the main engine that interacts with your computer's hardware. It is commonly known as the Linux kernel.
The GNU tools provide a method of interacting with the Linux kernel.

GNU Tools

Before providing a list of tools, let's look at the sort of tools you will need to be able to interact with the Linux kernel.
First of all, at the very basic level, before even considering the concept of a desktop environment, you will need a terminal, and the terminal must accept commands which the Linux operating system will use to perform tasks.
The common shell used to interact with Linux in a terminal is a GNU tool called Bash. To get Bash onto the computer in the first place, it needs to be compiled, so you also need a compiler and an assembler, which are also GNU tools.
In fact, GNU is responsible for a whole chain of tools that make it possible to develop programs and applications for Linux.
One of the most popular desktop environments is called GNOME, which stands for GNU Network Object Model Environment. Snappy, isn't it?
The most popular graphics editor is called GIMP which stands for GNU Image Manipulation Program.
The people behind the GNU project sometimes get annoyed that Linux gets all the credit when it is their tools that power it.
My view is that everyone knows who makes the engine in a Ferrari; nobody really knows who makes the leather seats, the audio player, the pedals, the door trims, and every other part of the car, but they are all equally important.

The Layers That Make Up A Standard Linux Desktop

The lowest component of a computer is the hardware.
On top of the hardware sits the Linux kernel.
The Linux kernel itself has multiple levels.
At the bottom sit the device drivers and security modules used to interact with the hardware.
On the next level, you have process schedulers and memory management used for managing the programs that run on the system.
Finally, at the top, there is a series of system calls which provide methods for interacting with the Linux kernel.
Above the Linux kernel are a series of libraries which programs can use to interact with the Linux system calls.
Just below the surface are the various low-level components such as the windowing system, logging systems, and networking.
Finally, you get to the top and that is where the desktop environment and desktop applications sit.
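
Even a high-level program walks down through these layers. In the Python sketch below (which assumes a Linux system), each call passes through a library wrapper and ends in a kernel system call:

    import os

    # Each line ends in a Linux system call, reached through the
    # C library that the Python runtime links against.
    pid = os.getpid()                            # getpid()
    fd = os.open("/etc/hostname", os.O_RDONLY)   # open()
    data = os.read(fd, 64)                       # read()
    os.close(fd)                                 # close()

    print(pid, data.decode().strip())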

A Desktop Environment

A desktop environment is a series of graphical tools and applications which make it easier for you to interact with your computer and basically get stuff done.
A desktop environment in its simplest form can just include a window manager and a panel. There are many levels of sophistication between the simplest and fully featured desktop environments.
For instance, the lightweight LXDE desktop environment includes a file manager, session editor, panels, launchers, a window manager, image viewer, text editor, terminal, archiving tool, network manager and music player.
The GNOME desktop environment includes all of that plus an office suite, web browser, GNOME-boxes, email client and many more applications.

So that's it. Hope you guys like it. If yes, then please comment down below, and don't forget to like, follow, and share our social media platforms.

Sunday, 2 September 2018

What are Network Application Programming Interfaces (Network APIs)? || tech talks group ||

An Application Programming Interface (API) lets computer programmers access the functionality of published software modules and services. An API defines data structures and subroutine calls that can be used to extend existing applications with new features, and build entirely new applications on top of other software components. Some of these APIs specifically support network programming.

Network programming is a type of software development for applications that connect and communicate over computer networks including the Internet. Network APIs provide entry points to protocols and re-usable software libraries. Network APIs support Web browsers, Web databases, and many mobile apps. They are widely supported across many different programming languages and operating systems.



Socket Programming

Traditional network programming followed a client-server model. The primary APIs used for client-server networking were implemented in socket libraries built into operating systems. Berkeley sockets and Windows Sockets (Winsock) APIs were the two primary standards for socket programming for many years.
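
The classic calling sequence is the same regardless of language. Here is a minimal echo server sketched in Python, whose socket module wraps the Berkeley sockets API (the port number is an arbitrary choice):

    import socket

    # The Berkeley sockets sequence: socket -> bind -> listen -> accept.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", 9000))
    server.listen(1)

    conn, addr = server.accept()    # blocks until a client connects
    with conn:
        data = conn.recv(1024)      # read the client's bytes...
        conn.sendall(data)          # ...and echo them straight back
    server.close()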

Remote Procedure Calls

RPC APIs extend basic network programming techniques by adding the capability for applications to invoke functions on remote devices instead of just sending messages to them. With the explosion of growth on the World Wide Web (WWW), XML-RPC emerged as one popular mechanism for RPC.
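
Python's standard library still ships XML-RPC support, which makes the idea easy to demonstrate. A self-contained sketch (server and client in one script; the add() method is our own invention for the example):

    import threading
    import xmlrpc.client
    from xmlrpc.server import SimpleXMLRPCServer

    # A toy XML-RPC server exposing one remotely callable function.
    server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
    server.register_function(lambda a, b: a + b, "add")
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # The client invokes add() as if it were local; the library wraps
    # the call in XML and POSTs it over HTTP behind the scenes.
    proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8000/")
    print(proxy.add(2, 3))  # 5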

Simple Object Access Protocol (SOAP)

SOAP was developed in the late 1990s as a network protocol using XML as its message format and HyperText Transfer Protocol (HTTP) as its transport. SOAP generated a loyal following of Web services programmers and became widely used for enterprise applications.
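
The shape of a SOAP 1.1 call is an XML envelope POSTed over HTTP. A minimal Python sketch (the endpoint, namespace, and GetQuote operation below are placeholders, not a real service):

    import urllib.request

    envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetQuote xmlns="http://example.com/stocks">
          <symbol>ACME</symbol>
        </GetQuote>
      </soap:Body>
    </soap:Envelope>"""

    request = urllib.request.Request(
        "http://example.com/soap-endpoint",
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "http://example.com/stocks/GetQuote"},
    )
    # urllib.request.urlopen(request) would return an XML envelope in reply.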

Representational State Transfer (REST)

REST is another programming model that supports Web services; it arrived on the scene more recently than SOAP. Like SOAP, REST APIs use HTTP, but instead of XML, REST applications often choose to use JavaScript Object Notation (JSON). REST and SOAP differ greatly in their approaches to state management and security, both key considerations for network programmers. Mobile apps may or may not utilize network APIs, but the ones that do often use REST.
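
A matching REST sketch in Python: fetch a resource over HTTP and parse the JSON reply (httpbin.org is a public testing service; any JSON-returning API looks much the same):

    import json
    import urllib.request

    # A typical REST interaction: GET a resource, receive JSON.
    with urllib.request.urlopen("https://httpbin.org/json") as response:
        document = json.load(response)

    # The reply is now an ordinary Python dictionary.
    print(document["slideshow"]["title"])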

The Future of APIs

Both SOAP and REST continue to be actively used for development of new Web services. Being a much newer technology than SOAP, REST is more likely to evolve and produce other offshoots of API development.

Operating systems have also evolved to support the many new Network API technologies. In modern operating systems like Windows 10, for example, sockets continue to be a core API, with HTTP and other additional support layered on top for RESTful style network programming.

As is often the case in computer fields, newer technologies tend to roll out much faster than old ones become obsolete. Look for interesting new API developments to happen especially in the areas of cloud computing and the Internet of Things (IoT), where the characteristics of devices and their usage models are quite different from traditional network programming environments.

So that's it. Hope you guys like it. If yes, then please comment down below, and don't forget to like, follow, and share our social media platforms.

How to run Windows Applications on Linux using Wine..? || tech talks group ||




The goal of the Wine project is to develop a "translation layer" for Linux and other POSIX compatible operating systems that enables users to run native Microsoft Windows applications on those operating systems.

This translation layer is a software package that "emulates" the Microsoft Windows API (Application Programming Interface), but the developers emphasize that Wine is not an emulator in the usual sense: it does not add an extra software layer on top of the native operating system, which would incur memory and computation overhead and negatively affect performance.

Instead, Wine provides alternative DLLs (Dynamic Link Libraries) that are needed to run the applications. These are native software components that, depending on their implementation, can be just as efficient as or more efficient than their Windows counterparts. That is why some MS Windows applications run faster on Linux than on Windows.

The Wine development team has made significant progress toward its goal of enabling users to run Windows programs on Linux. One way to measure that progress is to count the number of programs that have been tested. The Wine Application Database currently contains more than 8,500 entries. Not all of them work perfectly, but most commonly used Windows applications run quite well, including the following software packages and games: Microsoft Office 97, 2000, 2003, and XP, Microsoft Outlook, Microsoft Internet Explorer, Microsoft Project, Microsoft Visio, Adobe Photoshop, Quicken, Quicktime, iTunes, Windows Media Player 6.4, Lotus Notes 5.0 and 6.5.1, Silkroad Online 1.x, Half-Life 2 Retail, Half-Life Counter-Strike 1.6, and Battlefield 1942 1.6.

After installing Wine, Windows applications can be installed by placing the CD in the CD drive, opening a shell window, navigating to the CD directory containing the installation executable, and entering "wine setup.exe", if setup.exe is the installation program.

When executing programs in Wine, the user can choose between a "desktop-in-a-box" mode and mixable windows. Wine supports both DirectX and OpenGL games, although support for Direct3D is limited. There is also a Wine API that allows programmers to write software that is source- and binary-compatible with Win32 code.

The project was started in 1993 with the objective of running Windows 3.1 programs on Linux. Subsequently, versions for other Unix operating systems were developed. The original coordinator of the project, Bob Amstadt, handed the project over to Alexandre Julliard a year later. Alexandre has been leading the development effort ever since.

So that's it. Hope you guys like it. If yes, then please comment down below, and don't forget to like, follow, and share our social media platforms.